

August 21, 2014

User Story: Chitika Boosts Big Data with GlusterFS

Chitika Inc., an online advertising network based in Westborough, MA, sought to provide its data scientists with faster and simpler access to its massive store of ad impression data. The company managed to boost availability and broaden access to its data by swapping out HDFS for GlusterFS as the filesystem backend for its Hadoop deployment.

“There are a number of benefits to the utilization of Gluster, arguably the biggest of which is the system’s POSIX compliance. This allows us to very easily mount Gluster anywhere as if it was a disk local to the system, meaning that we can expose 60TB of data to anything in our data center, across any amount of servers, users, and applications.”

Logging Data Part 2: Taming the Storage Beast — Chitika blog

I talked to Chitika Senior Systems Administrator Nick Wood about his company’s GlusterFS deployment, and we discussed the challenges, opportunities, and next steps for GlusterFS in their environment.

Chitika’s GlusterFS storage deployment consists of four GlusterFS 3.5 hosts running Debian Wheezy. Each host is packed with disks, sporting 43TB of storage across six RAID arrays. Each of the resulting twenty-four arrays hosts a single GlusterFS brick, and together the bricks form a triple-replicated GlusterFS volume providing roughly 59TB of total storage, 40TB of which is currently consumed.
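
To make the layout concrete, here is a minimal sketch of how a volume of this shape is assembled with the gluster CLI. Host names, brick paths, and the volume name are all hypothetical, and the brick list is shortened to six bricks (two replica sets) for readability; the real deployment would list all twenty-four.

    # Probe the other hosts from one of the four (names are hypothetical)
    gluster peer probe gluster2
    gluster peer probe gluster3
    gluster peer probe gluster4

    # With "replica 3", each consecutive group of three bricks becomes one
    # replica set, so bricks are ordered to spread every set across hosts.
    gluster volume create advol replica 3 \
        gluster1:/bricks/b1 gluster2:/bricks/b1 gluster3:/bricks/b1 \
        gluster4:/bricks/b1 gluster1:/bricks/b2 gluster2:/bricks/b2
    gluster volume start advol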

Chitika’s client machines, which also run Debian Wheezy, primarily access this volume via the GlusterFS FUSE client, although one of their clients makes use of GlusterFS’s NFS support.
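
For reference, the two access paths look roughly like this from a client (server, volume, and mount-point names are hypothetical; GlusterFS’s built-in NFS server speaks NFSv3 only):

    # Native FUSE client
    mount -t glusterfs gluster1:/advol /mnt/advol

    # GlusterFS's built-in NFS server (NFSv3)
    mount -t nfs -o vers=3,mountproto=tcp gluster1:/advol /mnt/advol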

Bridging Chitika’s GlusterFS cluster and the company’s cluster of 36 Hadoop nodes is a customized version of the glusterfs-hadoop plugin. On the hardware side, the company taps InfiniBand gear to link up its GlusterFS and Hadoop clusters, using an IP over InfiniBand connection.

Wood explained that he’s keen to see the RDMA (Remote Direct Memory Access) support in GlusterFS stabilize enough for Chitika to shift from TCP to RDMA as their GlusterFS transport type, thereby allowing the company to take full advantage of its InfiniBand hardware.
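
GlusterFS exposes the transport choice at volume-creation time, and clients opt in at mount time. A sketch of what such a switch could look like, with hypothetical names throughout:

    # Create a volume that supports both transports
    gluster volume create rdmavol replica 3 transport tcp,rdma \
        gluster1:/bricks/b1 gluster2:/bricks/b1 gluster3:/bricks/b1

    # Clients then select RDMA when mounting
    mount -t glusterfs -o transport=rdma gluster1:/rdmavol /mnt/rdmavol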

Since their June 2013 deployment, Chitika’s GlusterFS storage solution has undergone two software upgrades while in production, both of which ran smoothly.

However, the team’s experience with its deployment hasn’t been trouble-free. At one point, problems with the operating system hard drive in one of the GlusterFS hosts led to inconsistency between some of the replicated data in the volume, a state also known as split-brain.

GlusterFS includes a self-heal daemon for repairing inconsistencies between replicated files, but there are scenarios that require manual intervention to determine which copies to retain and which to discard, which Wood and the Chitika team experienced first-hand.

“The self-heal didn’t really work as we expected. It correctly identified some corrupt files, correctly healed others, and completely ignored most,” Wood explained. “Luckily, we store compressed files and could easily tell what was corrupted. However, it took a 50TB filesystem crawl and semi-automated identification/restoration of good copies from the raw bricks to recover.”

The Chitika team has also encountered slow performance from common system utilities that carry out file stat operations, which led the team to develop alternative utilities that avoid the stat system call or operate in parallel.
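
The blog post doesn’t detail those replacement utilities, but here is a minimal sketch of the general technique, assuming a hypothetical mount point: os.scandir() reads file types from the directory entries themselves, avoiding a stat round-trip per file, while a thread pool keeps many directory reads in flight against the remote servers at once.

    import os
    from concurrent.futures import ThreadPoolExecutor

    def walk(path, pool, futures):
        """List one directory, scheduling subdirectories on the pool."""
        for entry in os.scandir(path):
            # is_dir() here uses the d_type from readdir() when available,
            # so no per-file stat call has to cross the network.
            if entry.is_dir(follow_symlinks=False):
                futures.append(pool.submit(walk, entry.path, pool, futures))
            else:
                print(entry.path)

    futures = []
    with ThreadPoolExecutor(max_workers=32) as pool:
        futures.append(pool.submit(walk, "/mnt/advol", pool, futures))
        # Each walk appends its children before it finishes, so draining
        # the list until it stays empty waits for the whole tree.
        while futures:
            futures.pop().result()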

Despite these bumps in the road, the team at Chitika is enthusiastic about its GlusterFS deployment, and is mulling plans to double its GlusterFS host count to eight to accommodate the addition of more compute nodes.

February 25, 2014

The Rise of Open Source Analytics Software

I was pleased to read about the progress of Graylog2, ElasticSearch, Kibana, et al. in the past year. Machine data analysis has been a growing area of interest for some time now, as traditional monitoring and systems management tools aren’t capable of keeping up with All of the Things that make up many modern workloads. And then there are the more general purpose, “big data” platforms like Hadoop along with the new in-memory upstarts sprouting up around the BDAS stack. Right now is a great time to be a data analytics person, because there has never in the history of computing been a richer set of open source tools to work with.

There’s a functional difference between what I call data processing platforms, such as Hadoop and BDAS, and data search and presentation layers, such as what you find with the ELK stack (ElasticSearch, Logstash and Kibana). While Hadoop, BDAS, et al. are great for processing extremely large data sets, they’re mostly useful as platforms for people Who Know What They’re Doing (TM), i.e., math and science PhDs and analytics groups within larger companies. But really, the search and presentation layers are, to me, where the interesting work is taking place these days: it’s where Joe and Jane programmer and operations person are going to make their mark on their organization. And many of the modern tools for data presentation can take data sets from a number of sources: log data, JSON, various forms of XML, event data piped directly over sockets or some other forwarding mechanism. This is why there’s a burgeoning market around tools that integrate with Hadoop and other platforms.

There’s one aspect of data search and presentation layers that has largely gone unmentioned. Everyone tends to focus on the software, and if it’s open source, that gets a strong mention. No one, however, seems to focus on the parts that are most important: data formats, data availability and data reuse. The best part about open source analytics tools is that, by definition, the data outputs must also be openly defined and available for consumption by other tools and platforms. This is in stark contrast to traditional systems management tools and even some modern ones. The most exciting premise of open source tooling in this area is the freedom from the dreaded data roach motel model, where data goes in, but it doesn’t come out unless you pay for the privilege of accessing the data you already own. Recently, I’ve taken to calling it the “skunky data model” and imploring people to “de-skunk their data.”

Last year, the Red Hat Storage folks came up with the tag line of “Liberate Your Information.” Yes, I know, it sounds hokey and like marketing double-speak, but the concept is very real. There are, today, many users, developers and customers trapped in the data roach motel, unable to get out, because they made the (poor) decision to go with a vendor that didn’t have their needs in mind. It would seem that the best way to prevent this outcome is to go with an open source solution, because again, by definition, an open source solution cannot create proprietary data: with the source open to the world, there is no way to hide how the data is indexed, managed, and stored.

In the past, one of the problems was that there simply weren’t a whole lot of choices for would-be customers. Luckily, we now have a wealth of options to choose from. As always, I recommend that those looking for solutions in this area go with a vendor that has their interests at heart. Go with a vendor that will allow you to access your data on your terms. Go with a vendor that gives you the means to fire them if they’re not a good partner for you. I think it’s no exaggeration to say that the only way to guarantee this freedom is to go with an open source solution.


October 17, 2013

Gluster Community Congratulates OpenStack Developers on Havana Release

The Gluster Community would like to congratulate the OpenStack Foundation and developers on the Havana release. With performance-boosting enhancements for OpenStack Block Storage (Cinder), Compute (Nova) and Image Service (Glance), as well as a native template language for OpenStack Orchestration (Heat), the OpenStack Havana release points the way to continued momentum for the OpenStack community. The many storage-related features in the Havana release coupled with the growing scope of typical OpenStack deployments demonstrate the need for scale-out, open software-defined storage solutions. The fusion of GlusterFS open software-defined storage with OpenStack software is a match made in cloud heaven.

Naturally, the Gluster Community would like to focus on OpenStack enhancements that pertain directly to our universe:

  • OpenStack Image Service (Glance)
    • OpenStack Cinder can now be used as a block-storage back-end for the Image Service. For Gluster users, this means that Glance can point to the same image as Cinder, so it is not necessary to copy the entire image before deploying, saving some valuable time.
  • OpenStack Compute (Nova)
    • OpenStack integration with GlusterFS utilizing the QEMU/libgfapi integration reduces kernel-space to user-space context switching, significantly boosting performance (a sample configuration sketch follows this list).
    • When connecting to NFS- or GlusterFS-backed volumes, Nova now uses the mount options set in the Cinder configuration. Previously, the mount options had to be set on each Compute node that would access the volumes. This allows operators to more easily automate the scaling of their storage platforms.
    • QEMU-assisted snapshotting now provides the ability to create Cinder volume snapshots on backends such as GlusterFS.
  • OpenStack Orchestration (Heat)
    • Initial support for the native template language (HOT). For OpenStack operators, this presents an easier way to orchestrate services in application stacks.
  • OpenStack Object Storage (Swift)
    • There is nothing in the OpenStack Havana release notes pertaining to GlusterFS and Swift integration, but we always like to talk about the fruits of our collaboration with Swift developers. We are dedicated to using the upstream Swift project API/proxy layer in our integration, and the Swift team has been a pleasure to work with, so kudos to them.
  • OpenStack Data processing (Savanna)
    • This incubating project enables users to easily provision and manage Apache Hadoop clusters on OpenStack. It’s a joint project between Red Hat, Mirantis and Hortonworks, and it points the way towards “Analytics as a Service”. It’s not an official part of OpenStack releases yet, but it’s come very far very quickly, and we’re excited about the data processing power it will spur.
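
As promised above, here is roughly what the Cinder and Nova sides of the GlusterFS integration looked like in a Havana-era configuration. Treat this as a sketch rather than a reference: host names and paths are placeholders, and exact option names should be checked against the documentation for your release.

    # /etc/cinder/cinder.conf (excerpt)
    [DEFAULT]
    volume_driver = cinder.volume.drivers.glusterfs.GlusterfsDriver
    glusterfs_shares_config = /etc/cinder/glusterfs_shares
    glusterfs_mount_point_base = /var/lib/cinder/glusterfs

    # /etc/cinder/glusterfs_shares -- one GlusterFS volume per line
    gluster1:/cindervol

    # /etc/nova/nova.conf (excerpt): lets Nova hand QEMU a gluster:// URL
    # instead of a FUSE path, engaging the libgfapi fast path
    qemu_allowed_storage_drivers = gluster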

To give an idea of the performance improvements in the GlusterFS-QEMU integration that Nova now takes advantage of, consider the early benchmarks below published by Bharata Rao, a developer at IBM’s Linux Technology Center.

 

FIO READ numbers

    Configuration                                aggrb (KB/s)   minb (KB/s)   maxb (KB/s)
    FUSE mount                                          15219           3804          5792
    QEMU GlusterFS block driver (FUSE bypass)           39357           9839         12946
    Base                                                43802          10950         12918

FIO WRITE numbers

    Configuration                                aggrb (KB/s)   minb (KB/s)   maxb (KB/s)
    FUSE mount                                          24579           6144          8423
    QEMU GlusterFS block driver (FUSE bypass)           42707          10676         17262
    Base                                                42393          10598         15646

 

“Base” refers to an operation directly on a disk filesystem.
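
The “FUSE bypass” rows correspond to QEMU opening the image directly over libgfapi with a gluster:// URL, along these lines (host, volume, and image names are hypothetical):

    # Create a 10GB image directly on the Gluster volume; no FUSE mount involved
    qemu-img create gluster://gluster1/testvol/vm.img 10G

    # Boot a guest against the same image via the QEMU GlusterFS block driver
    qemu-system-x86_64 -m 1024 -drive file=gluster://gluster1/testvol/vm.img,if=virtio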

Havana vs. Pre-Havana

This snapshot shows the difference between the Grizzly and Havana releases with respect to GlusterFS.

  • Glance: In Grizzly, Glance could point to filesystem images mounted via GlusterFS, but the VM image had to be copied in order to deploy it. In Havana, Glance can point to the Cinder interface, removing the need to copy the image.
  • Cinder: In Grizzly, Cinder was integrated with GlusterFS, but only with FUSE-mounted volumes. In Havana, it can use the libgfapi-QEMU integration for KVM hypervisors.
  • Nova: In Grizzly, Nova had no integration with GlusterFS. In Havana, it can use the libgfapi-QEMU integration.
  • Swift: In Grizzly, GlusterFS maintained a separate repository of changes to the Swift proxy layer. In Havana, those patches are merged upstream, providing a cleaner break between API and implementation.

 

The Orchestration feature we are excited about is not Gluster-specific, but it has several touch points with GlusterFS, especially in light of the newly introduced Manila filesystem-as-a-service project for OpenStack (https://launchpad.net/manila). Imagine being able to orchestrate all of your storage services with Heat, building the ultimate in scale-out cloud applications with open software-defined storage that scales with your application as needed.
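
To make the HOT format concrete, a minimal template sketch that provisions a Cinder volume (which, in a cloud configured as above, could land on GlusterFS) might look like the following; resource names and sizes are purely illustrative:

    heat_template_version: 2013-05-23

    description: Minimal HOT sketch that creates a Cinder volume

    resources:
      data_volume:
        type: OS::Cinder::Volume
        properties:
          name: app-data
          size: 10   # GB; which backend this lands on depends on Cinder config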

We’re very excited about the Havana release and we look forward to working with the global OpenStack community on this and future releases. Download the latest GlusterFS version, GlusterFS 3.4, from the Gluster Community at gluster.org, and check out the performance with a GlusterFS 3.4-backed OpenStack cloud.

October 16, 2013

Automated Hadoop Deployment on GlusterFS with Apache Ambari

The glusterfs-hadoop team is pleased to announce that the Apache Ambari project now supports the automated deployment and configuration of Hadoop on top of GlusterFS.

What is Apache Ambari?

Apache Ambari is a browser-based Hadoop management web UI that is used to provision, manage and monitor Hadoop clusters. Once Apache Ambari is installed on a management server, a user can use it to select a particular Hadoop stack to deploy on a group of servers. In addition, the user can specify which services within the stack they want deployed, as well as the appropriate configurations for each of the services. Up until now, Apache Ambari has only supported the automated deployment and configuration of Hortonworks Data Platform stacks on top of the Hadoop Distributed FileSystem (HDFS).

Deploying HDP 1.3.2 on GlusterFS within Apache Ambari

Over the last several months, a number of engineers from Hortonworks and Red Hat have collaborated within the Apache Ambari Incubator project to modify the core HDP 1.3.2 stack to provide users the choice of either HDFS or GlusterFS. Should one select GlusterFS, the Hadoop distribution is then configured to use the Hadoop FileSystem plugin for GlusterFS.

Prior to the GlusterFS support in Ambari, one had to separately download Apache Hadoop and configure it to use the glusterfs-hadoop Hadoop FileSystem plugin in order to get Hadoop to run on GlusterFS. All of these steps are now automated.
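
For a sense of what’s being automated: wiring Hadoop to the plugin previously meant placing the plugin jar on each node and adding properties along these lines to core-site.xml. This is a sketch only; the exact property set has varied across glusterfs-hadoop releases, so consult the project wiki for your version.

    <!-- core-site.xml (excerpt); values are illustrative -->
    <property>
      <name>fs.glusterfs.impl</name>
      <value>org.apache.hadoop.fs.glusterfs.GlusterFileSystem</value>
    </property>
    <property>
      <name>fs.default.name</name>
      <value>glusterfs:///</value>
    </property>
    <property>
      <!-- where the GlusterFS volume is FUSE-mounted on each node -->
      <name>fs.glusterfs.mount</name>
      <value>/mnt/glusterfs</value>
    </property>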

Figure 1 – Users can select a stack that includes the GlusterFS Hadoop FileSystem


Figure 2 – Users can choose whether they want HDFS or GlusterFS as the Hadoop FileSystem


In order to take advantage of this new feature, please follow the instructions on the glusterfs-hadoop project wiki.

So what’s next?

The Apache Ambari project is currently working on a re-architecture of the stack definition in order to support the ability to arbitrarily define and extend Ambari stacks. This should go a long way to enabling broader support for Hadoop Compatible FileSystems and improving Hadoop Interoperability.

Lastly, at the time of writing, Apache Ambari only works on RHEL, CentOS, OEL and SLES. Thus, we’ve also been putting some time into getting Apache Ambari working on Fedora so that the Fedora community has access to it. This should also make integration with the existing glusterfs-hadoop and related projects a lot simpler.