July 22, 2013

Nutanix vs. GlusterFS or Projects are not Products

Anyone who knows me knows that I’ve been a VMware user for a long time. I’ve spent a large chunk of my career building virtualization solutions for different companies based on VMware tech. I’ve been active in the VMware community, and I’ve got to say it’s one of the healthiest I’ve seen in a long time. There’s a ton of interesting work going on and a robust ecosystem of partners building really great products with VMware at their core.

Those of you who REALLY know me also know that I’m a passionate proponent of Open Source and Free Software wherever it makes sense. Open Source empowers customers, users, and developers to participate more intimately in how their software evolves. I’m also a fan of organizations paying for the Free Software they consume, which not only addresses the support requirements most organizations have, but also encourages them to "have a stake" in how the products are built and how those projects evolve.

That last sentence is critical. Projects are not products. As worlds collide, and as proprietary companies with little prior exposure to traditional open source projects start getting involved, it’s easy for them to occasionally say things that make those of us in the community scratch our heads. But what do we mean when we say "projects aren’t products"?

What does that even mean?
First, take a look at this:
[Image: rht-lifecycle (Red Hat project-to-product lifecycle diagram)]

As an example, let’s say a Red Hat engineer (or a customer, or a Fedora contributor, or a partner vendor like IBM, or ANYONE really) wants to add a cool new feature specifically to RHEL, Red Hat’s flagship product. The new feature would first be added to Fedora, tested, hardened, and when stable, if selected, rolled into the downstream product, RHEL. This “upstream first” methodology keeps the “community of developers” front and center, and it doesn’t hold things back from what is contributed to the Open Source project.[1]

Similarly, new features and functionality for Red Hat Enterprise Storage get added to Gluster first. Gluster is the project, whereas Red Hat Enterprise Storage is the product. This doesn’t stop companies from deploying projects into production; many do (to the chagrin of sales folks everywhere, I’m guessing), but overall it helps the community of users and developers of all types, since feedback and bug reports are public for EVERYONE. Remember, projects like Gluster can’t exist behind a wall, because contributors come from lots of different companies and backgrounds. Everyone is welcome to submit code and patches, as well as file bug reports. This (IMHO) is what makes these Open Source projects great, and also what tends to drive the most confusion with proprietary companies trying to interact with them in the wild.

[Image: nutanix_tweet (screenshot of the tweet discussed below)]

This was a recent post by Binny Gill, the Director of Engineering at Nutanix. I won’t get into the stream of back and forth that happened after it was posted; I just wanted to share my thoughts as someone who has chosen to live in both worlds for a long time. [2]

It’s easy to bash on Open Source projects. With all of the mailing lists for users and developers public, if you treat them as competitive intelligence you’ve got a target-rich environment from which to pull all the ammo you need to smash them into the dirt. Every single one. That’s by design, though, as most projects are interested in developing in the open. The beauty of this is that if you want to engage the community, even if you feel like you work at a "competitor", you can do so! Join the mailing list, grab the source, watch, learn. That’s what it’s there for. I’d actually encourage Nutanix employees to download and test out the new libgfapi and QEMU/KVM integration against NDFS. I know Nutanix has solid KVM support and I know their engineers are rock stars, so it would actually be pretty awesome to see a side-by-side comparison. Data data data!

Comparing products with projects comes off like a cheap shot: for example, grabbing a single bug report from an outdated version of Gluster and claiming Gluster itself is not enterprise ready. If NDFS were an open source project, with an upstream project and publicly available user and developer mailing lists where all new patches and bugs were reported, I’m willing to bet there would be plenty of "non-enterprise ready" commentary available. But there isn’t, because NDFS isn’t Open Source. The support logs aren’t public. It gets to put only its best foot forward in the public sphere.

Personally, I’d love to see Nutanix add some real gasoline to their support of Open Source by contributing NDFS back to the community as an upstream project, especially if it is the best of the best. Then we’d ALL be able to move beyond twitfights and one-sided performance testing (although I’m still interested in what the numbers would look like). It would also add another solid option to the open source enterprise storage landscape. The more the merrier. With the adoption rates of Nutanix within the VMware scope, I don’t doubt it’s awesome-sauce with a side of amazing. I can’t imagine how contributing an Open Source storage component would impact the converged hardware sales and support space they’re currently rocking, and while I can’t see it happening any time soon, I’d love to rock some Open Source NDFS love in my home lab.

Personally, I’m thrilled to see more closed source vendors consuming and supporting open source virtualization projects. I think it’s a safe way to get started, and more involvement should be encouraged. Companies like Nutanix already understand the value proposition of Open Source in their space and are focused on making things like KVM and OpenStack rock with NDFS and the Nutanix platform. It’s a hell of a start and it makes sense. Some of the most exciting and innovative technologies evolving today are happening within upstream Open Source projects.

In the meantime, I’ll just stick with KVM and Gluster in my home lab and do what I can to improve the upstream projects and look forward to the conversations at VMworld this year.

[1] I’m aware there are TONS of different ways to do open source, and lots of projects do things differently. For Red Hat, the "Upstream First" mantra means that everyone can contribute, and everyone can get a seat at the table if they want it. I understand this is overly simplistic, but I hope you get the idea.

[2] Thankfully, all the right folks are now talking on Twitter about technical differences and explaining feature functionality and architecture. It’s unfortunate that, with all of the latest and greatest features of Gluster available for Nutanix to review and tinker with, they don’t appear to have a "competitive" lab set up with Gluster for testing. (Hey Nutanix, give me a ring; I’d love to set up a geo-replicated Gluster cluster for you with zero software costs.)

February 26, 2013

Converged Infrastructure prototyping with Gluster 3.4 alpha and QEMU 1.4.0

I just wrapped up my presentation at the Gluster Workshop at CERN, where I discussed the advantages Open Source brings to tackling converged infrastructure challenges. Here is my slide deck. Just a quick heads up: some animation is lost in the PDF export, as is the color commentary that went with almost every slide.

During the presentation I demoed the new QEMU/GlusterFS native integration leveraging libgfapi. For those of you wondering what that means: in short, there’s no need for FUSE anymore, and QEMU talks to GlusterFS natively on the back end. Awesome.

So for my demo I needed two boxes running QEMU/KVM/GlusterFS. This would provide the compute and storage hypervisor layers. As I only have a single laptop to tour Europe with, I obviously needed a nested KVM environment.

If you’ve got enough hardware, feel free to skip the Enable Nested Virtualization section and jump ahead to the base OS installation.

This wasn’t an easy environment to get up and running; this is alpha code, boys and girls, so expect to roll your sleeves up. OK, with that out of the way, I’d like to walk through the steps I took to get my demo environment up and running. This walkthrough assumes you have Fedora 18 installed and updated, with virt-manager and KVM installed.

Enable Nested Virtualization

Since we’re going to want to install an OS on our VM running on our Gluster/QEMU cluster that we’re building, we’ll need to enable Nested Virtualization.

Let’s first check whether nested virtualization is enabled. If the command below returns N, it’s disabled and you’ll need the steps in this section. If it returns Y, skip ahead to the install.

$ cat /sys/module/kvm_intel/parameters/nested
N

If it’s not, we’ll need to load the kvm_intel module with the nested option set. The easiest way to make this persistent is with a modprobe configuration file:

$ echo "options kvm-intel nested=1" | sudo tee /etc/modprobe.d/kvm-intel.conf
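
If you’d rather not reboot, reloading the module should pick up the new option as well, assuming nothing is currently using KVM:

$ sudo modprobe -r kvm_intel
$ sudo modprobe kvm_intel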

Reboot your machine (or reload the module as above) once the change has been made, then check again to see whether the feature is enabled:

$ cat /sys/module/kvm_intel/parameters/nested
Y

That’s it; we’re done prepping the host.

Install the VM OS

Starting with my base Fedora laptop, I’ve installed virt-manager for VM management. I wanted to use Boxes, but it’s not designed for this type of configuration. So: create your new VM. I selected the "Fedora http install" option, as I didn’t have an ISO lying around. Also, http install = awesome.
[Screenshot: gluster01]

To do this, select the http install option and enter the nearest available location.

[Screenshot: gluster02]

For me this was Masaryk University in Brno (where I happened to be sitting during Dev Days 2013):

http://ftp.fi.muni.cz/pub/linux/fedora/linux/releases/18/Fedora/x86_64/os/


I went with an 8 GB base disk to start (we’ll add another one in a bit) and gave the VM 1 GB of RAM and a single default vCPU. Start the VM build and install.

[Screenshot: gluster03]

The install will take a bit longer than usual, as it downloads the install files during the initial boot.

[Screenshot: gluster04]

Select the language you want to use and continue to the installation summary screen. Here we’ll want to change the software selection option.

[Screenshot: gluster05]

and select the minimal install:

[Screenshot: gluster06]

During the installation, go ahead and set the root password:

[Screenshot: gluster07]

Once the installation is complete, the VM will reboot. When it’s back up, power it down. Although we’ve enabled nested virtualization, we still need to pass the CPU flags on to the VM.

In the virt-manager window, right-click on the VM and select Open. In the VM window, select View > Details. Rather than guessing the CPU architecture, use the copy-from-host option and click OK.

[Screenshot: gluster08]

While you’re here, go ahead and add an additional 20 GB virtual drive. Make sure you select virtio for the drive type!

[Screenshot: gluster09]

Boot your VM up and let’s get started.

Base installation components

You’ll need to install some base components before you get started installing GlusterFS or QEMU.

After logging in as root,

yum update


yum install net-tools wget xfsprogs binutils

Now we’ll create the mount point and format the additional drive we just added. The -i size=512 below gives XFS larger inodes, which Gluster recommends so its extended attributes fit comfortably.

mkdir -p /export/brick1


mkfs.xfs -i size=512 /dev/vdb

We’ll also need to add this to /etc/fstab so the mount persists across reboots.

Add the following line to /etc/fstab:

/dev/vdb /export/brick1 xfs defaults 1 2

Once you’re done with this, let’s go ahead and mount the drive.

mount -a && mount

Firewalls. YMMV

It may be just me (I’m sure it is), but I struggled to get Gluster working with firewalld on Fedora 18. This is not recommended in production environments, but for our all-VMs-on-a-laptop deployment, I just disabled and removed firewalld.

yum remove firewalld
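
If you’d prefer to keep firewalld around instead of removing it, opening the Gluster ports is the alternative. A rough sketch follows; the management ports below are standard, but the brick port range varies by Gluster version and brick count, so treat it as an assumption to verify:

firewall-cmd --permanent --add-port=24007-24008/tcp
firewall-cmd --permanent --add-port=49152-49156/tcp
firewall-cmd --reload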

Gluster 3.4.0 Alpha Installation

First thing we’ll need to do on our VM is configure and enable the gluster repo.

wget http://download.gluster.org/pub/gluster/glusterfs/qa-releases/3.4.0alpha/Fedora/glusterfs-alpha-fedora.repo

and move it to /etc/yum.repos.d/

mv glusterfs-alpha-fedora.repo /etc/yum.repos.d/

Now we enable the repo and install glusterfs:

yum update

yum install glusterfs-server glusterfs-devel

It’s important to note that we need the glusterfs-devel package for the QEMU integration we’ll be testing. Once done, we’ll start the glusterd service and verify that it’s working.
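
(Optional) If you’d like glusterd to come back automatically after a reboot instead of starting it by hand each time, Fedora 18 is systemd-based, so enabling the unit should do it (assuming the package ships a glusterd.service unit, which the Fedora builds normally do):

systemctl enable glusterd.service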

Break: build the 2nd VM

OK folks, if you’ve made it here, grab a coffee and do the install again on a 2nd VM. You’ll need the 2nd VM as a replication target before you proceed.

Break: network prep for both VMs

As we’re on the private NAT’d network that virt-manager manages on our laptop, we’ll need to give the VMs static addresses and edit /etc/hosts on each to add both servers with their addresses. We’re not proud here, people; this is a test environment. If you’d rather set up proper DNS, go for it, but I won’t judge you if you don’t.

1) Change both VMs to use static addresses in the NAT range.
2) Change the VMs’ hostnames.
3) Update /etc/hosts on both VMs to include both nodes (a sketch follows below). This is hacky but expedient.
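
As a sketch, the /etc/hosts entries on both VMs end up looking something like this; the ci01/ci02 hostnames match the volume-create command further down, while the specific addresses are just examples from the default virt-manager NAT range:

192.168.122.101 ci01.local ci01
192.168.122.102 ci02.local ci02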

Back to Gluster

Start and verify the glusterd service on both VMs:
service glusterd start
service glusterd status
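
One step worth calling out explicitly: the two glusterd instances need to be peered before the volume create below will succeed. From the first node (using the example hostnames), something like:

gluster peer probe ci02.local
gluster peer status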

On either host, we’ll need to create the gluster volume and set it for replication.
gluster volume create vmstor replica 2 ci01.local:/export/brick1 ci02.local:/export/brick1

Now we’ll start the volume we just created
gluster volume start vmstor

Verify that everything is good; if this returns cleanly, you’re up and running with GlusterFS!
gluster volume info

Building QEMU dependencies

Let’s get some prerequisites in place for building the latest QEMU:

yum install lvm2-devel git gcc-c++ make glib2-devel pixman-devel

Now we’ll download QEMU:

git clone git://git.qemu-project.org/qemu.git

The rest is pretty standard compiling from source. Change into the cloned qemu directory, then start by configuring your build. I’ve trimmed the target list to save time, as I know I won’t use many of the QEMU-supported architectures.
./configure --enable-glusterfs --target-list=i386-softmmu,x86_64-softmmu,x86_64-linux-user,i386-linux-user
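
The notes gloss over the actual build, so for completeness the usual steps after configure apply; the -j value just parallelizes compilation, so pick whatever fits your VM’s vCPU count:

make -j2
make install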

With that done, everything on this host is finished, and we’re ready to start building VMs that use GlusterFS natively, bypassing FUSE and leveraging thin provisioning. W00!

Creating Virtual Disks on GlusterFS

qemu-img create gluster://ci01:0/vmstor/test01?transport=socket 5G

Breaking this down, we’re using qemu-img to create a disk natively on GlusterFS that’s five gigabytes in size. I’m still looking for more information about what the transport socket parameter does; expect an answer soonish.
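
For reference, my reading of the upstream QEMU docs is that the image specification for the gluster driver looks roughly like gluster[+transport]://server[:port]/volname/image, where the transport defaults to tcp and port 0 (or omitting the port) means the default. So a simpler form of the same command, with a hypothetical image name, would be:

qemu-img create gluster://ci01/vmstor/test02.img 5G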

Build a VM and install an OS onto the GlusterFS mounted disk image

At this point you’ll want something to actually install on your image. I went with TinyCore because as it is I’m already pushing up against the limitations of this laptop with nested virtualization. You can download TinyCore Linux here.

qemu-system-x86_64 --enable-kvm -m 1024 -smp 4 -drive file=gluster://ci01/vmstor/test01,if=virtio -vnc 192.168.122.209:1 --cdrom /home/theron/CorePlus-current.iso

This is the quickest way to get things moving; I skipped using virsh for the demo and am assigning the VNC IP and port manually. Once the VM starts up, you should be able to connect to it from your external host and start the install process.
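
From the outer host, that just means pointing a VNC client at the address and display given above (display :1 corresponds to TCP port 5901). With a viewer like TigerVNC installed, which is an assumption on my part, it’s a one-liner:

vncviewer 192.168.122.209:1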

[Screenshot: gluster10]

To get the install going, select the hard drive that was built with qemu-img and follow the OS install procedures.

Finish

At this point you’re done and you can start testing and submitting bugs!

I’d expect to see some interesting things with OpenStack in this space as well as tighter oVirt integration moving forward.

Let me know what you think about this guide and if it was useful.

Side note

Also, something completely related: I’m pleased to announce that I’ve joined the Open Source and Standards team at Red Hat, working to promote and help make upstream projects wildly successful. If you’re unsure what that means, or you’re wondering why Red Hat cares about upstream projects, PLEASE reach out and say hello.

References:

nested KVM
KVM VNC
Using QEMU to boot VM on GlusterFS
QEMU downloads
QEMU Gluster native file system integration

January 31, 2013

Is converged infrastructure a crutch?

This started as a response to a Twitter conversation with @DuncanYB and @joshobrien77 re: converged infrastructure. Duncan recently wrote a great blog post about converged compute and storage. Go read that first. I’ll wait here.

Welcome back! OK, to start, I agree with Duncan’s comments that Nutanix is certainly in the leader group for what’s viewed today as "Converged Infrastructure", in that it delivers a whole-stack solution. The other company mentioned in Duncan’s blog post is SimpliVity. Both companies are doing awesome stuff and have figured out ways to solve REALLY complex problems.

Taking a step back and thinking about what they’re both delivering: both companies have a hardware solution that bundles virtualized network, compute, and storage capabilities. Fundamentally, they’re both delivering x86 platforms with a new methodology for what I’m calling "complete virtualization"*. Converged Infrastructure today implies a specific hardware vendor component in the delivery.

If we look at what problems virtualization solved in the enterprise, separating the compute from the hardware really pushed things forward in an evolutionary way. With old-school virtualization (vSphere 101, compute only), old towers were brought down and, in some cases, new towers were built up. Storage and network hardware and designs remained relatively unaffected, but were certainly leveraged in new ways.

Inserting a shim between the storage hardware and its consumers, virtualizing storage itself, is moving forward quickly. This isn’t as simple as virtualizing traditional storage architectures. While you can throw traditional storage in a VM and leverage it (think a virtual EMC, NetApp, or Nexenta appliance), this doesn’t give a full-featured CI answer to storage. Virtualized scale-out storage is a key component in CI solutions today, providing the stability, replication, and scalability needed to host enterprise applications. There are other places outside of CI where this is happening in IT today. Looking at Gluster, Ceph, or even Isilon, these solutions scale out storage effectively. If they can be instantiated virtually, it allows for the collapse of compute, storage, and network onto a single "node". I’d say this establishes a precedent for these types of "CI"-style storage solutions, and I assume they’ll continue to gain traction.

While it’s all x86 commodity hardware with a shiny nameplate, there’s a final ingredient in the secret sauce: automation. This is critical for scaling out a complete solution. I see these challenges day in and day out with customers in the field. Puppet, Chef, Ansible, and others are already starting to solve them in the enterprise. I don’t think anyone sees this going away.

"Complete virtualization" boils down to a single software stack delivering this solution regardless of the box or unit it’s delivered on. I’d say that Nutanix delivers this. Does that mean I have to buy my solution from Nutanix? In what is called "converged infrastructure" today, I do. What added value does Nutanix hardware provide versus commodity hardware? (Support would be my first thought, known hardware configs, etc. More on this later.) The flip side would be: does this type of solution lock us into the familiar hardware paradigm that VMware and virtualization in general worked to free us from?

Having said all of that, I can’t see a reason this couldn’t be bundled as a software package and still allow for full functionality. Tying hardware support to software support is a step backwards from where we’re going. It will work for some customers, but I believe there’s a risk of assigning artificial value to hardware that is completely commoditized today. The value is in the software, and I’d assume that will continue to be the path forward. VMware, Red Hat, and even Microsoft really nailed that point home.

As OpenStack continues to make inroads, I would hope to see full "complete virtualization" products coming out soonish. There are certainly CI solutions in the works; Nebula jumps to mind.

Trying to summarize before I ramble off: my initial thought is that enterprise IT will still want choice in hardware. Nutanix is delivering an AMAZING solution because the different software pieces aren’t tied together natively, which makes this a hard problem to solve. Forcing a bundled solution of hardware and software just to deliver functionality that leverages commodity x86 hardware seems awkward. Additionally, requiring new support chains within larger enterprises can create more challenges and make adoption harder than it needs to be. I expect open source solutions (from Red Hat, VMware, or others) to deliver these capabilities faster and gain additional traction in this space, serving as an infrastructure foundation for OpenStack/CloudStack-style products. I think CI will fade away into "complete virtualization" software solutions.

All in all, I think this is a product wrapped around the idea of "software defined datacenters".

* Complete virtualization is a horrible name for this. Ideas welcome.