All posts tagged oVirt


by on July 30, 2014

Web Interface to Manage Gluster Nodes

oVirt is an open source virtualization management tool that can be used to create and manage Gluster nodes through an easy-to-use web interface.

This document covers how Gluster can be used with oVirt.

Want to manage Gluster nodes with ease using oVirt? Create your own oVirt setup by following these simple steps.

Machine requirements:

  • Fedora 19 with at least 4GB of memory and 20GB of hard disk space.
  • Recommended browsers: Mozilla Firefox 17, or IE9 and above, for the web admin portal.

Installation steps:

  • Download and install the Fedora 19 ISO.
  • Add the official oVirt repository for Fedora: “yum localinstall http://resources.ovirt.org/releases/ovirt-release.noarch.rpm”
  • Install ovirt-engine by running the command “yum install -y ovirt-engine”.
  • Once the installation completes, set up oVirt with Gluster by running “engine-setup”.
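
Put together, the installation boils down to these commands, run as root on Fedora 19 (the repository URL is the one given above):

yum localinstall http://resources.ovirt.org/releases/ovirt-release.noarch.rpm
yum install -y ovirt-engine
engine-setup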

Once you run the above command, you will be prompted with the questions below. Provide the answers as follows.

The installer will take you through a series of interactive questions as listed in the following example. If you do not enter a value when prompted, the installer uses the default settings which are stated in [ ] brackets.

The default ports 80 and 443 must be available to access the manager on HTTP and HTTPS respectively.
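
A quick way to confirm that nothing else is already listening on those ports is a check like this (a minimal sketch using ss; no output means the ports are free):

ss -tln | grep -E ':(80|443)\b'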

[ INFO ] Stage: Initializing
[ INFO ] Stage: Environment setup
Configuration files: ['/etc/ovirt-engine-setup.conf.d/10-packaging.conf']
Log file: /var/log/ovirt-engine/setup/ovirt-engine-setup-20140411224650.log
Version: otopi-1.2.0 (otopi-1.2.0-1.fc19)
[ INFO ] Stage: Environment packages setup
[ INFO ] Yum Status: Downloading Packages
[ INFO ] Yum Download/Verify: iproute-3.12.0-2.fc19.x86_64
[ INFO ] Yum Status: Check Package Signatures
[ INFO ] Yum Status: Running Test Transaction
[ INFO ] Yum Status: Running Transaction
[ INFO ] Yum update: 1/2: iproute-3.12.0-2.fc19.x86_64
[ INFO ] Yum updated: 2/2: iproute
[ INFO ] Yum Verify: 1/2: iproute.x86_64 0:3.12.0-2.fc19 – u
[ INFO ] Yum Verify: 2/2: iproute.x86_64 0:3.9.0-1.fc19 – ud
[ INFO ] Stage: Programs detection
[ INFO ] Stage: Environment setup
[ INFO ] Stage: Environment customization

–== PRODUCT OPTIONS ==–

–== PACKAGES ==–

[ INFO ] Checking for product updates…
[ INFO ] No product updates found

–== NETWORK CONFIGURATION ==–

Host fully qualified DNS name of this server [localhost.localdomain]: [Provide the FQDN or a locally resolvable hostname]

If you do not provide an FQDN, setup will display the following warning.

[WARNING] Failed to resolve localhost.localdomain using DNS, it can be resolved only locally
Setup can automatically configure the firewall on this system.
Note: automatic configuration of the firewall may overwrite current settings.

Do you want Setup to configure the firewall? (Yes, No) [Yes] : Yes

[ INFO ] firewalld will be configured as firewall manager.

–== DATABASE CONFIGURATION ==–

Where is the Engine database located? (Local, Remote) [Local]: Local

Setup can configure the local postgresql server automatically for the engine to run. This may conflict with existing applications.
Would you like Setup to automatically configure postgresql and create Engine database, or prefer to perform that manually? (Automatic, Manual) [Automatic]: Automatic

–== OVIRT ENGINE CONFIGURATION ==–

Application mode (Both, Virt, Gluster) [Both]: Gluster (we choose Gluster here, since we are only interested in managing Gluster nodes).

Engine admin password:  [provide a password, which would be used to login]
Confirm engine admin password: [confirm the password]

If the password provided is weak, setup displays a warning like the one below.

[WARNING] Password is weak: it is based on a dictionary word
Use weak password? (Yes, No) [No]: [answer Yes if you want to keep the weak password]

–== PKI CONFIGURATION ==–

Organization name for certificate [localdomain]: ABCD

–== APACHE CONFIGURATION ==–

Setup can configure apache to use SSL using a certificate issued from the internal CA.
Do you wish Setup to configure that, or prefer to perform that manually? (Automatic, Manual) [Automatic]: [Automatic]

Setup can configure the default page of the web server to present the application home page. This may conflict with existing applications.
Do you wish to set the application as the default page of the web server? (Yes, No) [Yes]: Yes

–== SYSTEM CONFIGURATION ==–

Configure WebSocket Proxy on this machine? (Yes, No) [Yes]: [No]

[ INFO ] NFS configuration skipped with application mode Gluster

–== MISC CONFIGURATION ==–

–== END OF CONFIGURATION ==–

[ INFO ] Stage: Setup validation
[WARNING] Cannot validate host name settings, reason: resolved host does not match any of the local addresses
[WARNING] Less than 16384MB of memory is available

If the system has less than 16GB of memory, setup displays the above warning (16GB is the recommended amount).

–== CONFIGURATION PREVIEW ==–

Engine database name : engine
Engine database secured connection : False
Engine database host : localhost
Engine database user name : engine
Engine database host name validation : False
Engine database port : 5432
PKI organization : ABCD
Application mode : gluster
Firewall manager : firewalld
Update Firewall : True
Configure WebSocket Proxy : True
Host FQDN : localhost.localdomain
Configure local Engine database : True
Set application as default page : True
Configure Apache SSL : True

Please confirm installation settings (OK, Cancel) [OK]:

The installation commences. The following message displays, indicating that the installation was successful.

[ INFO ] Stage: Transaction setup
[ INFO ] Stopping engine service
[ INFO ] Stopping websocket-proxy service
[ INFO ] Stage: Misc configuration
[ INFO ] Stage: Package installation
[ INFO ] Stage: Misc configuration
[ INFO ] Creating PostgreSQL ‘engine’ database
[ INFO ] Configuring PostgreSQL
[ INFO ] Creating Engine database schema
[ INFO ] Creating CA
[ INFO ] Configuring WebSocket Proxy
[ INFO ] Generating post install configuration file ‘/etc/ovirt-engine-setup.conf.d/20-setup-ovirt-post.conf’
[ INFO ] Stage: Transaction commit
[ INFO ] Stage: Closing up

–== SUMMARY ==–

SSH fingerprint: <SSH_FINGERPRINT>
Internal CA: <CA_FINGERPRINT>
Web access is enabled at:
http://example.ovirt.org:80/ovirt-engine
https://example.ovirt.org:443/ovirt-engine
Please use the user “admin” and password specified in order to login into oVirt Engine

–== END OF SUMMARY ==–

[ INFO ] Starting engine service
[ INFO ] Restarting httpd
[ INFO ] Restarting nfs services
[ INFO ] Generating answer file ‘/var/lib/ovirt-engine/setup/answers/20140310163837-setup.conf’
[ INFO ] Stage: Clean up
Log file is located at /var/log/ovirt-engine/setup/ovirt-engine-setup-20140310163604.log
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
[ INFO ] Execution of setup completed successfully

`Installation completed successfully`

Great! You are almost there.

Now browse to the URL “https://<ip>/ovirt-engine”, provide the user name “admin”, and use the password you gave during setup.

Add your Gluster nodes to the console and enjoy features like adding new or importing existing clusters, creating and deleting volumes, adding and removing bricks, setting and resetting volume options, optimizing volumes for virt store, and rebalancing volumes.

Another fantastic way to manage your Gluster nodes through a UI

Not interested in performing all of the above steps, but still want the same capabilities?

Is it possible? Yes, why not? Go through the steps below.

1) Install docker on your machine by running the command “yum install -y docker”

2) Start docker by running the command “systemctl start docker”

3) Search for the image by running the command “docker search kasturidocker/centos_ovirt_3.5”

4) Log in to the Linux container by running the command “docker run -i -t kasturidocker/centos_ovirt_3.5 /bin/bash”


5) Check whether ovirt-engine is running with “service ovirt-engine status”; if it is not, start it.

6) Get the IP of the system and browse to the URL “http://<ip>/ovirt-engine”.

That is it. Your web console is ready in just six steps; start adding your Gluster nodes and managing them.
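
Put together, the whole sequence looks like this (run the last command inside the container, and start ovirt-engine if it is not already running):

yum install -y docker
systemctl start docker
docker search kasturidocker/centos_ovirt_3.5
docker run -i -t kasturidocker/centos_ovirt_3.5 /bin/bash
service ovirt-engine status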



by on October 17, 2013

Red Hat Related Talks at LinuxCon + CloudOpen Europe

LinuxCon and CloudOpen Europe are just a few days away, and the line-up for talks looks really good. If you’re putting together your schedule, we have a couple of suggestions for talks that you’d probably find interesting.

The full schedule is on Sched.org, which makes it really easy to keep track of the talks you don’t want to miss. Also, don’t miss the Gluster Workshop on Thursday.

Monday, October 21

Tuesday, October 22

Wednesday, October 23

by on September 16, 2013

oVirt 3.3, Glusterized

The All-in-One install I detailed in Up and Running with oVirt 3.3 includes everything you need to run virtual machines and get a feel for what oVirt can do, but the downside of the local storage domain type is that it limits you to that single All in One (AIO) node.

You can shift your AIO install to a shared storage configuration to invite additional nodes to the party, and oVirt has supported the usual shared storage suspects such as NFS and iSCSI since the beginning.

New in oVirt 3.3, however, is a storage domain type for GlusterFS that takes advantage of Gluster’s new libgfapi feature to boost performance compared to FUSE or NFS-based methods of accessing Gluster storage with oVirt.

With a GlusterFS data center in oVirt, you can distribute your storage resources right alongside your compute resources. As a new feature, GlusterFS domain support is rougher around the edges than more established parts of oVirt, but once you get it up and running, it’s worth the trouble.

In oVirt, each host can be part of only one data center at a time. Before we decommission our local storage domain, we have to shut down any VMs running on our host, and, if we’re interested in moving them to our new Gluster storage domain, we need to ferry those machines over to our export domain.

GlusterFS Domain & RHEL/CentOS:

The new, libgfapi-based GlusterFS storage type has a couple of software prerequisites that aren’t currently available for RHEL/CentOS — the feature requires qemu 1.3 or better and libvirt 1.0.1 or better. Earlier versions of those components don’t know about the GlusterFS block device support, so while you’ll be able to configure a GlusterFS domain on one of those distros today, any attempts to launch VMs will fail.

Versions of qemu and libvirt with the needed functionality backported are in the works, and should be available soon, but for now, you’ll need Fedora 19 to use the GlusterFS domain type. For RHEL or CentOS hosts, you can still use Gluster-based storage, but you’ll need to do so with the POSIXFS storage type.
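
If you're not sure which versions a host is running, a quick check is enough to tell (a minimal sketch; it assumes qemu-img and libvirtd are on your PATH):

qemu-img --version      # the GlusterFS block device support needs qemu 1.3 or better
libvirtd --version      # and libvirt 1.0.1 or better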

The setup procedures are very similar, so I’ll include the POSIXFS instructions below as well in case you want to pursue that route in the meantime. Once the updated packages become available, I’ll modify this howto accordingly.

SELinux, Permissive

Currently, the GlusterFS storage scenario described in this howto requires that SELinux be put in permissive mode. You can put SELinux in permissive mode with the command:

sudo setenforce 0

To make the shift to permissive mode persist between reboots, edit “/etc/sysconfig/selinux” and change SELINUX=enforcing to SELINUX=permissive.
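
If you prefer a one-liner, something like this sed command makes the same change (a sketch, assuming the stock /etc/sysconfig/selinux layout):

sudo sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/sysconfig/selinux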

Glusterizing Your AIO Install in Nine Easy Steps

  1. Evacuate Your VMs

    Visit the “Virtual Machines” tab in your Administrator Portal, shut down any running VMs, and click “Export,” “OK” to copy them over to your export domain.

    While an export is in progress, there’ll be an hourglass icon next to your VM name. Once any VMs you wish to save have moved over, you can reclaim some space by right-clicking the VMs and hitting “Remove,” and then “OK.”

  2. Detach Your Domains

    Next, detach your ISO_DOMAIN from the local storage data center by visiting the “Storage” tab, clicking on the ISO_DOMAIN, visiting the “Data Center” tab in the bottom pane, clicking “local_datacenter,” then “Maintenance,” then “Detach,” and “OK” in the following dialog. Follow these same steps to detach your EXPORT_DOMAIN as well.

  3. Modify Your Data Center, Cluster & Host

    Now, click the “Data Centers” tab, select the “Default” data center, and click “Edit.” In the resulting dialog box, choose “GlusterFS” in the “Type” drop down menu and click “OK.”

    If you’re using RHEL/CentOS and taking the Gluster via POSIXFS storage route I referenced above, choose “POSIXFS” in the “Type” drop down menu instead.

    Next, click the “Clusters” tab, select the “Default” cluster, and click “Edit.” In the resulting dialog box, click the check box next to “Enable Gluster Service” and click “OK.”

    Then, visit the “Hosts” tab, select your “local_host” host, and click “Maintenance.” When the host is in maintenance mode, click “Edit,” select “Default” from the “Data Center” drop down menu, hit “OK,” and then “OK” again in the following dialog.

  4. Next, hit the command line for a few tweaks that ought to be handled automatically, but aren’t (yet). (A consolidated sketch of the command-line steps in this walkthrough appears after the numbered list.)

    Install the vdsm-gluster package, start gluster, and restart vdsm:

    sudo yum install vdsm-gluster

    Now, edit the file “/etc/glusterfs/glusterd.vol” [bz#] to add “option rpc-auth-allow-insecure on” to the list of options under “volume management.”

    As part of the virt store optimizations that oVirt applies to Gluster volumes, there’s a Gluster virt group in which oVirt places optimized volumes. The file that describes this group isn’t currently provided in a package, so we have to fetch it from Gluster’s source repository:

    sudo curl https://raw.github.com/gluster/glusterfs/master/extras/group-virt.example -o /var/lib/glusterd/groups/virt [bz#]

    Now, we’ll start the Gluster service and restart the vdsm service:

    sudo service glusterd start
    sudo service vdsmd restart
  5. Next, we’ll create a mount point for our Gluster brick and set its permissions appropriately. To keep this howto short, I’m going to use a regular directory on our test machine’s file system for the Gluster brick. In a production setup, you’d want your Gluster brick to live on a separate XFS partition.
    sudo mkdir /var/lib/exports/data
    sudo chown 36:36 /var/lib/exports/data [bz#]
  6. Now, we’re ready to re-activate our host, and use it to create the Gluster volume we’ll be using for VM storage. Return to the Administrator Portal, visit the “Hosts” tab, and click “Activate.”

    Then, visit the “Volumes” tab, click “Create Volume,” and give your new volume a name. I’m calling mine “data.” Check the “Optimize for Virt Store” check box, and click the “Add Bricks” button.

    In the resulting dialog box, populate “Brick Directory” with the path we created earlier, “/var/lib/exports/data” and click “Add” to add it to the bricks list. Then, click “OK” to exit the dialog, and “OK” again to return to the “Volumes” tab.

  7. Before we start up our new volume, we need to head back to the command line to apply the “server.allow-insecure” option we added earlier to our volume:
    sudo gluster volume set data server.allow-insecure on
  8. Now, back to the Administrator Portal to start our volume and create a new data domain. Visit the “Volumes” tab, select your newly-created volume, and click “Start.”

    Then, visit the “Storage” tab, hit “New Domain,” give your domain a name, and populate the “Path” field with your machine’s hostname colon volume name:

    mylittlepony.lab:data

    If you’re using RHEL/CentOS and taking the Gluster via POSIXFS storage route I referenced above, you need to populate the “Path” field with your machine’s hostname colon slash volume name instead. Again, this is only if you’re taking the POSIXFS route. With the GlusterFS storage type, that pesky slash [bz#] won’t prevent the domain from being created, but it’ll cause VM startup to fail mysteriously! Also, in the “VFS Type” field, you’ll need to enter “glusterfs”.

    Click “OK” and wait a few moments for the new storage domain to initialize. Next, click on your detached export domain, choose the “Data Center” tab in the bottom pane, click “Attach,” select “Default” data center, and click “OK.” Perform the same steps with your iso domain.

  9. All Right. You’re back up and running, this time with a GlusterFS Storage Domain. If you ferried any of the VMs you created on the original local storage domain out to your export domain, you can now ferry them back:

    Visit the “Storage” tab, select your export domain, click “VM Import” in the lower pane, select the VM you wish to import, and click “Import.” Click “OK” on the dialog that appears next. If you didn’t remove the VM you’re importing from your local storage domain earlier, you may have to “Import as cloned.”
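
For reference, here is a consolidated sketch of the command-line portions of the steps above, using the same "data" volume name and /var/lib/exports/data brick path as in this walkthrough:

sudo yum install vdsm-gluster
# add "option rpc-auth-allow-insecure on" under "volume management" in /etc/glusterfs/glusterd.vol
sudo curl https://raw.github.com/gluster/glusterfs/master/extras/group-virt.example -o /var/lib/glusterd/groups/virt
sudo service glusterd start
sudo service vdsmd restart
sudo mkdir /var/lib/exports/data
sudo chown 36:36 /var/lib/exports/data
# after creating the "data" volume in the Administrator Portal, and before starting it:
sudo gluster volume set data server.allow-insecure on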

Next Steps

From here, you can experiment with different types of Gluster volumes for your data domains. For instance, if, after adding a second host to your data center, you want to replicate storage between the two hosts, you’d create a storage brick on both of your hosts, choose the replicated volume type when creating your Gluster volume, create a data domain backed by that volume, and start storing your VMs there.

You can also disable the NFS ISO and Export shares hosted from your AIO machine and re-create them on new Gluster volumes, accessed via Gluster’s built-in NFS server. If you do, make sure to disable your system’s own NFS service, as kernel NFS and Gluster NFS conflict with each other.
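
On Fedora 19, disabling the kernel NFS server looks something like this (a minimal sketch, assuming the systemd nfs-server unit):

sudo systemctl stop nfs-server.service
sudo systemctl disable nfs-server.service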

by on

oVirt 3.3 Spices Up the Software Defined Datacenter with OpenStack and Gluster Integration

The oVirt 3.3 release may not quite let you manage all the things in the data center, but it’s getting awfully close. Just shy of six months after the oVirt 3.2 release, the team has delivered an update with groundbreaking integration with OpenStack components, GlusterFS, and a number of ways to custom tailor oVirt to your data center’s needs.

What is oVirt?

oVirt is an entirely open source approach to the software defined datacenter. oVirt builds on the industry-standard open source hypervisor, KVM, and delivers a platform that can scale from one system to hundreds of nodes running thousands of instances.

The oVirt project comprises two main components:

  • oVirt Node: A minimal Linux install that includes the KVM hypervisor and is tuned for running massive workloads.
  • oVirt Engine: A full-featured, centralized management portal for managing oVirt Nodes. oVirt Engine gives admins, developers, and users the tools needed to orchestrate their virtual machines across many oVirt Nodes.

See the oVirt Feature Guide for a comprehensive list of oVirt’s features.

What’s New in 3.3?

In just under six months of development, the oVirt team has made some impressive improvements and additions to the platform.

Integration with OpenStack Components

Evaluating or deploying OpenStack in your datacenter? The oVirt team has added integration with Glance and Neutron in 3.3 to enable sharing components between oVirt and OpenStack.

By integrating with Glance, OpenStack’s service for managing disk and server images and snapshots, you’ll be able to leverage your KVM-based disk images between oVirt and OpenStack.

OpenStack Neutron integration allows oVirt to use Neutron as an external network provider. This means you can tap Neutron from oVirt to provide networking capabilities (such as network discovery, provisioning, security groups, etc.) for your oVirt-managed VMs.

oVirt 3.3 also provides integration with Cloud-Init, so oVirt can simplify provisioning of virtual machines with SSH keys, user data, timezone information, and much more.

Gluster Improvements

With the 3.3 release, oVirt gains support for using GlusterFS as a storage domain. This means oVirt can take full advantage of Gluster’s integration with Qemu, providing a performance boost over the previous method of using Gluster’s POSIX exports. Using the native QEMU-GlusterFS integration allows oVirt to bypass the FUSE overhead and access images stored in Gluster as a network block device.
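
As a rough illustration (the hostname and volume name here are hypothetical), qemu's native GlusterFS driver lets an image on a Gluster volume be addressed directly over libgfapi instead of through a FUSE mount path:

qemu-img create -f qcow2 gluster://gluster1.example.com/data/vm01.qcow2 20G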

The latest oVirt release also allows admins to use oVirt to manage their Gluster clusters, and oVirt will recognize changes made via Gluster’s command line tools. In short, oVirt has gained tight integration with network-distributed storage, and Gluster users have easy management of their domains with a simple user interface.

Extending oVirt

Out of the proverbial box, oVirt is already a fantastic platform for managing your virtualized data center. However, oVirt can be extended to fit your computing needs precisely.

  • External Tasks give external applications the ability to inject tasks to the oVirt engine via a REST API, and track changes in the oVirt UI.
  • Custom Device Properties allow you to specify custom properties for virtual devices, such as vNICs, when devices may need non-standard settings.
  • Java-SDK is a full SDK for interacting with the oVirt API from external applications.

Getting oVirt 3.3

Ready to take oVirt for a test drive? Head over to the oVirt download page and check out Jason Brooks’ Getting Started with oVirt 3.3 Guide. Have questions? You can find us on IRC or subscribe to the users mailing list to get help from others using oVirt.

by on September 11, 2013

Up and Running with oVirt 3.3

The oVirt Project is now putting the finishing touches on version 3.3 of its KVM-based virtualization management platform. The release will be feature-packed, including expanded support for Gluster storage, new integration points for OpenStack’s Neutron networking and Glance image services, and a raft of new extensibility and usability upgrades.

oVirt 3.3 also sports an overhauled All-in-One (AIO) setup plugin, which makes it easy to get up and running with oVirt on a single machine to see what oVirt can do for you.

Prerequisites

  • Hardware: You’ll need a machine with at least 4GB RAM and processors with hardware virtualization extensions. A physical machine is best, but you can test oVirt effectively using nested KVM as well.
  • Software: oVirt 3.3 runs on the 64-bit editions of Fedora 19 or Red Hat Enterprise Linux 6.4 (or on the equivalent version of one of the RHEL-based Linux distributions such as CentOS or Scientific Linux).
  • Network: Your test machine’s domain name must resolve properly, either through your network’s DNS, or through the /etc/hosts files of your test machine itself and through those of whatever other nodes or clients you intend to use in your installation. On Fedora 19 machines with a static IP address (DHCP configurations appear not to be affected), you must disable NetworkManager for the AIO installer to run properly [BZ]:
    $> sudo systemctl stop NetworkManager.service
    $> sudo systemctl mask NetworkManager.service
    $> sudo service network start
    $> sudo chkconfig network on

    Also, check the configuration file for your interface (for instance, /etc/sysconfig/network-scripts/ifcfg-eth0) and remove the trailing zero from “GATEWAY0”, “IPADDR0”, and “NETMASK0”, as this syntax appears only to work while NetworkManager is enabled. [BZ]

  • All parts of oVirt should operate with SELinux in enforcing mode, but SELinux bugs do surface. At the time that I’m writing this, the Glusterization portion of this howto requires that SELinux be put in permissive mode. Also, the All in One install on CentOS needs SELinux to be in permissive mode to complete. You can put SELinux in permissive mode with the command:
    sudo setenforce 0

    To make the shift to permissive mode persist between reboots, edit “/etc/sysconfig/selinux” and change SELINUX=enforcing to SELINUX=permissive.


Install & Configure oVirt All in One

  1. Run one of the following commands to install the oVirt repository on your test machine.
    1. For Fedora 19:
      $> sudo yum localinstall http://ovirt.org/releases/ovirt-release-fedora.noarch.rpm -y
    2. For RHEL/CentOS 6.4 (also requires EPEL):
       $> sudo yum localinstall http://resources.ovirt.org/releases/ovirt-release-el6-8-1.noarch.rpm -y
      sudo yum localinstall http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm -y
  2. Next, install the oVirt All-in-One setup plugin:
    $> sudo yum install ovirt-engine-setup-plugin-allinone -y
  3. Run the engine-setup installer. When asked whether to configure VDSM on the host, answer yes. You should be fine accepting the other default values.
    $> sudo engine-setup
    [screenshot: engine-setup]

    Once the engine-setup script completes, you’ll have a working management server that doubles as a virtualization host. The script sets up a local storage domain for hosting VM images, and an iso domain for storing iso images for installing operating systems on the VMs you create.

  4. Before we leave the command line and fire up the oVirt Administrator Portal, we’re going to create one more storage domain: an export domain, which oVirt uses for ferrying VM images and templates between data centers. We can do this by creating the export domain mount point, setting the permissions properly, copying and tweaking the configuration files that engine-setup created for the iso domain, and reloading nfs-server:
    $> sudo mkdir /var/lib/exports/export
    $> sudo chown 36:36 /var/lib/exports/export
    1. For Fedora:
      $> sudo cp /etc/exports.d/ovirt-engine-iso-domain.exports /etc/exports.d/ovirt-engine-export-domain.exports

      In ovirt-engine-export-domain.exports, change “iso” to “export”:

      $> sudo vi /etc/exports.d/ovirt-engine-export-domain.exports
      $> sudo service nfs-server reload
    2. For RHEL/CentOS:
      $> sudo vi /etc/exports

      In /etc/exports append the line:

      /var/lib/exports/export    0.0.0.0/0.0.0.0(rw)
      $> sudo service nfs reload
  5. Now, fire up your Web browser, visit the address of your oVirt engine machine, and click the “Administrator Portal” link. Log in with the user name “admin” and the password you entered during engine-setup.
    [screenshots: Administrator Portal login]

    Once logged into the Administrator Portal, click the “Storage” tab, select your ISO_DOMAIN, and visit the “Data Center” tab in the bottom half of the screen. Next, click the “Attach” link, check the check box next to “local_datacenter,” and hit “OK.” This will attach the storage domain that houses your ISO images to your local datacenter.

    [screenshots: Storage tab, attaching the ISO domain]

    Next, we’ll create and activate our export domain. From the “Storage” tab, click “New Domain,” give the export domain a name (I’m using EXPORT_DOMAIN), choose “local_datacenter” from the Data Center drop down menu, choose “Export / NFS” from the “Domain Function / Storage Type” drop down menu, enter your oVirt machine’s IP or FQDN followed by :/var/lib/exports/export in the Export Path field, and click OK.

    [screenshot: new export domain]
  6. Before we create a VM, let’s head back to the command line and upload an iso image that we can use to install an OS on the VM we create. Download an iso image:
    $> curl -O http://mirrors.kernel.org/fedora/releases/19/Fedora/x86_64/iso/Fedora-19-x86_64-netinst.iso

    Upload the image into your iso domain (the password is the same as for the Administrator Portal):

    $> engine-iso-uploader upload -i ISO_DOMAIN Fedora-19-x86_64-netinst.iso
  7. Now we’re ready to create and run a VM. Head back to the oVirt Administrator Portal, visit the “Virtual Machines” tab, and click “New VM.” In the resulting dialog box, give your new instance a name and click “OK.”
    [screenshot: New VM dialog]

    In the “New Virtual Machine – Guide Me” dialog that pops up next, click “Configure Virtual Disks,” enter a disk size, and click “OK.” Hit “Configure Later” to dismiss the Guide Me dialog.

    [screenshot: configuring virtual disks]

    Next, select your newly-created VM, and click “Run Once.” In the dialog box that appears, expand “Boot Options,” check the “Attach CD” check box, choose your install iso from the drop down, and hit “OK” to proceed.

    [screenshot: Run Once dialog]

    After a few moments, the status of your new VM will switch from red to green, and you can click on the green monitor icon next to “Migrate” to open a console window.

    [screenshot: running VM console]

    oVirt defaults to the SPICE protocol for new VMs, which means you’ll need the virt-viewer package installed on your client machine. If a SPICE client isn’t available to you, you can opt for VNC by stopping your VM, clicking “Edit,” “Console,” “Show Advanced Options,” and choosing VNC from the “Protocol” drop down menu.
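
On a Fedora client machine, installing the SPICE client is a one-line affair (a sketch; virt-viewer is the package named above):

$> sudo yum install -y virt-viewer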

That’s enough for this blog post, but stay tuned for more oVirt 3.3 how-to posts. In particular, I have walkthroughs in the works for making use of oVirt’s new and improved Gluster storage support, and for making oVirt and OpenStack play nicely together.

If you’re interested in getting involved with the project, you can find all the mailing list, issue tracker, source repository, and wiki information you need here.

On IRC, I’m jbrooks, ping me in the #ovirt room on OFTC or write a comment below and I’ll be happy to help you get up and running or get pointed in the right direction.

Finally, be sure to follow us on Twitter at @redhatopen for news on oVirt and other open source projects in the Red Hat world.

by on June 19, 2013

Event Recap: oVirt Shanghai Workshop

Last month, over 80 users and developers gathered at Intel’s Shanghai China Campus for a two-day workshop centered on oVirt, the Open Virtualization management platform.

Jackson He, General Manager of Intel Asia and Pacific R&D Ltd. and Intel Software and Services Group PRC, provided the opening keynote, in which he spoke to a mostly local audience about Intel’s growth in China and continued commitment to open source software including such projects as oVirt, OpenStack, KVM and Hadoop. Intel’s continued commitment to Open Source virtualization was further demonstrated throughout the Workshop with great presentations by Gang Wei and Dongxiao Xu.

With three tracks spread across two days, this was the first workshop that also featured a day-long Gluster Operations Track. This track, led by John Mark Walker, Community Lead for Gluster, allowed for not only introductions and examples of leveraging GlusterFS storage solutions with oVirt, but also more advanced discussions, including a talk on developing with libgfapi, the GlusterFS translator framework, presented by Vijay Bellur, a Senior Principal Software Engineer at Red Hat.

In conjunction with the Gluster track on the first day of the workshop was the primary oVirt Operations track. With Red Hat presentations ranging from an introduction to oVirt to getting into the weeds of Troubleshooting, oVirt attendees were exposed to all levels of operational use cases and deployment tips. Presentations from IBM engineers Shu Ming and Mark Wu provided solid operational discussions covering oVirt testing in a nested virtualization configuration and outlining IBM’s commitment to and planned development objectives for oVirt.

The second day was all about oVirt developers. Of particular interest to attendees was a presentation by Zhengsheng Zhou of IBM discussing work done to support oVirt on Ubuntu. A key highlight of this event is the continued growth and interest around open virtualization solutions, with oVirt serving a foundational role. The interest in making oVirt available to other platforms is greatly encouraging, and we’re excited to see the community grow to include new platforms.

Also on day two, Doron Fediuck of Red Hat presented on oVirt SLAs, enforced by MoM, the Memory Overcommitment Manager. This presentation also provided a roadmap moving forward on this and other key features. Great discussions on this and most of the presentations allowed for developers to get engaged and focus in on where to help moving forward.

Presentations from the event are now available on the oVirt website.

With over 80 attendees representing Intel, IBM, Red Hat as well as the greater oVirt and Gluster communities, we’re pleased that this workshop was a success.

by on August 31, 2012

oVirt 3.1, Glusterized

One of the cooler new features in oVirt 3.1 is the platform’s support for creating and managing Gluster volumes. oVirt’s web admin console now includes a graphical tool for configuring these volumes, and vdsm, the service responsible for controlling oVirt’s virtualization nodes, has a new sibling, vdsm-gluster, for handling the back end work.

Gluster and oVirt make a good team — the scale out, open source storage project provides a nice way of weaving the local storage on individual compute nodes into shared storage resources.

To demonstrate the basics of using oVirt’s new Gluster functionality, I’m going to take the all-in-one engine/node oVirt rig that I stepped through recently and convert it from an all-in-one node with local storage to a multi-node-ready configuration with shared storage provided by Gluster volumes that tap the local storage available on each of the nodes. (Thanks to Robert Middleswarth, whose blog posts on oVirt and Gluster I relied on while learning about the combo.)

The all-in-one installer leaves you with a single machine that hosts both the oVirt management server, aka ovirt-engine, and a virtualization node. For storage, the all-in-one setup uses a local directory for the data domain, and an NFS share on the single machine to host an iso domain, where OS install images are stored.

We’ll start the all-in-one to multi-node conversion by putting our local virtualization host, local_host, into maintenance mode by clicking the Hosts tab in the web admin console, clicking the local_host entry, and choosing “Maintenance” from the Hosts navigation bar.

Once local_host is in maintenance mode, we click edit, change to the Default data center and host cluster from the drop down menus in the dialog box, and then hit OK to save the change.

This is assuming that you stuck with NFS as the default storage type while running through the engine-setup script. If not, head over to the Data Centers tab and edit the Default data center to set “NFS” as its type. Next, head to the Clusters tab, edit your Default cluster, fill the check box next to “Enable Gluster Service,” and hit OK to save your changes. Then, go back to the Hosts tab, highlight your host, and click Activate to bring it back from maintenance mode.

Now head to a terminal window on your engine machine. Fedora 17, the OS I’m using for this walkthrough, includes version 3.2 of Gluster. The oVirt/Gluster integration requires Gluster 3.3, so we need to configure a separate repository to get the newer packages:

# cd /etc/yum.repos.d/
# wget http://repos.fedorapeople.org/repos/kkeithle/glusterfs/fedora-glusterfs.repo

Next, install the vdsm-gluster package, restart the vdsm service, and start up the gluster service:

# yum install vdsm-gluster
# service vdsmd restart
# service glusterd start

The all-in-one installer configures an NFS share to host oVirt’s iso domain. We’re going to be exposing our Gluster volume via NFS, and since the kernel NFS server and Gluster’s NFS server don’t play nicely together, we have to disable the former.

# systemctl stop nfs-server.service && systemctl disable nfs-server.service

Through much trial and error, I found that it was also necessary to restart the wdmd service:

# systemctl restart wdmd.service

In the move from v3.0 to v3.1, oVirt dropped its NFSv3-only limitation, but that requirement remains for Gluster, so we have to edit /etc/nfsmount.conf and ensure that Defaultvers=3, Nfsvers=3, and Defaultproto=tcp.
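
In other words, the relevant lines in /etc/nfsmount.conf should end up looking like this:

Defaultvers=3
Nfsvers=3
Defaultproto=tcp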

Next, edit /etc/sysconfig/iptables to add the firewall rules that Gluster requires. You can paste the rules in just before the reject lines in your config.

# glusterfs
-A INPUT -p tcp -m multiport --dport 24007:24047 -j ACCEPT
-A INPUT -p tcp --dport 111 -j ACCEPT
-A INPUT -p udp --dport 111 -j ACCEPT
-A INPUT -p tcp -m multiport --dport 38465:38467 -j ACCEPT

Then restart iptables:

# service iptables restart

Next, decide where you want to store your gluster volumes — I store mine under /data — and create this directory if need be:

# mkdir /data

Now, head back to the oVirt web admin console, visit the Volumes tab, and click Create Volume. Give your new volume a name, and choose a volume type from the drop down menu. For our first volume, let’s choose Distribute, and then click the Add Bricks button. Add a single brick to the new volume by typing the path you desire into the Brick Directory field, clicking Add, and then OK to save the changes.

Make sure that the box next to NFS is checked under Access Protocols, and then click OK. You should see your new volume listed — highlight it and click Start to start it up. Follow the same steps to create a second volume, which we’ll use for a new ISO domain.

For now, the Gluster volume manager neglects to set brick directory permissions correctly, so after adding bricks on a machine, you have to return to the terminal and run chown -R 36.36 /data (assuming /data is where you are storing your volume bricks) to enable oVirt to write to the volumes.
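
For example, on each machine where you’ve added bricks (assuming /data is your brick location, as above):

# chown -R 36.36 /data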

Once you’ve set your permissions, return to the Storage tab of the web admin console to add data and iso domains backed by the volumes we’ve created. Click New Domain, choose Default data center from the data center drop down, and Data / NFS from the storage type drop down. Fill the export path field with your engine’s host name and the volume name from the Gluster volume you created for the data domain. For instance: “demo1.localdomain:/data”

Wait for the data domain to become active, and repeat the above process for the iso domain. For more information on setting up storage domains in oVirt 3.1, see the quick start guide.

Once the iso domain comes up, BAM, you’re Glusterized. Now, compared to the default all-in-one install, things aren’t too different yet — you have one machine with everything packed into it. The difference is that your oVirt rig is ready to take on new nodes, which will be able to access the NFS-exposed data and iso domains, as well as contribute some of their own local storage into the pool.

To check this out, you’ll need a second test machine, with Fedora 17 installed (though you can recreate all of this on CentOS or another Enterprise Linux starting with the packages here). Take your F17 host (I start with a minimal install), install the oVirt release package, download the same fedora-glusterfs.repo we used above, and make sure your new host is accessible on the network from your engine machine, and vice versa. Also, the bug preventing F17 machines running a 3.5 or higher kernel from attaching to NFS domains isn’t fixed yet, so make sure you’re running a 3.3 or 3.4 version of the kernel.

Head over to the Hosts tab on your web admin console, click New, supply the requested information, and click OK. Your engine will reach out to your new F17 machine, and whip it into a new virtualization host. (For more info on adding hosts, again, see the quick start guide.)

Your new host will require most of the same Glusterizing setup steps that you applied to your engine server: make sure that vdsm-gluster is installed, edit /etc/nfsmount.conf, add the gluster-specific iptables rules and restart iptables, create and chown 36.36 your data directory.

The new host should see your Gluster-backed storage domains, and you should be able to run VMs on both hosts and migrate them back and forth. To take the next step and press local storage on your new node into service, the steps are pretty similar to those we used to create our first Gluster volumes.

First, though, we have to run the command “gluster peer probe NEW_HOST_HOSTNAME” from the engine server to get the engine and its new buddy hooked up Glusterwise (this is another of the wrinkles I hope to see ironed out soon and taken care of automatically in the background).
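
For example, from the engine server (the hostname here is hypothetical):

# gluster peer probe node2.localdomain
# gluster peer status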

We can create a new Gluster volume, data1, of the type Replicate. This volume type requires at least two bricks, and we’ll create one in the /data directory of our engine, and one in the /data directory of our node. This works just the same as with the first Gluster volume we set up, just make sure that when adding bricks, you select the correct server in the drop down menu:

Just as before, we have to return to the command line to chown -R 36.36 /data on both of our machines to set the permissions correctly, and start the volumes we’ve created.

On my test setup, I created a second data domain, named data1, stored on the replicated Gluster domain, with the storage path set to localhost:/data1, on the rationale that VM images stored on the data1 domain would stay in sync across the pair of hosts, enabling either of my hosts to tap local storage for running a particular VM image. But I’m a newcomer to Gluster, so consult the documentation for more clueful Gluster guidance.