all posts tagged openshift


August 26, 2016

Possible configurations of GlusterFS in Kubernetes/OpenShift setup

In previous blog posts we discussed how to use GlusterFS as persistent storage in Kubernetes and OpenShift. In a nutshell, GlusterFS can be deployed/used in a Kubernetes/OpenShift environment as:

*) Containerized GlusterFS (pod)
*) GlusterFS as an OpenShift Service and Endpoint
*) A GlusterFS volume as a Persistent Volume (PV), with the GlusterFS volume plugin binding this PV to a Persistent Volume Claim (PVC)
*) A GlusterFS template to deploy GlusterFS pods in an OpenShift environment

All the configuration files that can be used to deploy GlusterFS can be found at github.com/humblec/glusterfs-kubernetes-openshift/ or github.com/gluster/glusterfs-kubernetes-openshift. Let's see how to use these files to deploy GlusterFS in Kubernetes and OpenShift. We will start with deploying GlusterFS pods in an OpenShift/Kubernetes environment.

Deploying GlusterFS Pod:
[Update] The pod file has been renamed to gluster-pod.yaml in the mentioned repo. More details about Gluster containers can be found at http://www.slideshare.net/HumbleChirammal/gluster-containers
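The repo contains the complete pod definition; purely for illustration, here is a minimal sketch of what such a gluster-pod.yaml might look like. The image, labels and hostPath brick are taken from the `oc describe` output below, while the privileged security context is an assumption (GlusterFS containers generally need extra privileges to manage bricks on the host):

apiVersion: v1
kind: Pod
metadata:
  name: gluster-1
  labels:
    name: gluster-1
spec:
  containers:
  - name: glusterfs
    image: gluster/gluster-centos
    securityContext:
      privileged: true              # assumption: typically required to manage bricks
    volumeMounts:
    - name: brickpath
      mountPath: /mnt/brick1        # brick directory inside the container
  volumes:
  - name: brickpath
    hostPath:
      path: /mnt/brick1             # host directory backing the brick
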
GlusterFS pods can be deployed in Kubernetes/OpenShift so that the Gluster nodes run in containers and can provide persistent storage for the OpenShift/Kubernetes setup. The example files in this repo are used for this demo.

Step 1: Create the GlusterFS pod.

[root@atomic-node2 gluster_pod]# oc create -f gluster-1.yaml

Step 2: Get details about the GlusterFS pod.

[root@atomic-node2 gluster_pod]# oc describe pod gluster-1
Name:               gluster-1
Namespace:          default
Image(s):           gluster/gluster-centos
Node:               atomic-node1/10.70.43.174
Start Time:         Tue, 17 May 2016 10:19:17 +0530
Labels:             name=gluster-1
Status:             Running
Reason:
Message:
IP:                 10.70.43.174
Replication Controllers:
Containers:
  glusterfs:
    Container ID:   docker://ff8f4af700d725dfe0e08939ec011c34ddf9dedc7204e0ced1cc355a56150742
    Image:          gluster/gluster-centos
    Image ID:       docker://033de9c44a8aabde55ce8a2b751ccf5bc345fdb534ea30e79a8fa70b82dc7761
    QoS Tier:
      cpu:          BestEffort
      memory:       BestEffort
    State:          Running
      Started:      Tue, 17 May 2016 10:20:35 +0530
    Ready:          True
    Restart Count:  0
    Environment Variables:
Conditions:
  Type      Status
  Ready     True
Volumes:
  brickpath:
    Type:       HostPath (bare host directory volume)
    Path:       /mnt/brick1
  default-token-72d89:
    Type:       Secret (a secret that should populate this volume)
    SecretName: default-token-72d89
Events:
  FirstSeen  LastSeen  Count  From                    SubobjectPath                      Reason     Message
  ─────────  ────────  ─────  ────                    ─────────────                      ──────     ───────
  1m         1m        1      {scheduler }                                               Scheduled  Successfully assigned gluster-1 to atomic-node1
  1m         1m        1      {kubelet atomic-node1}  implicitly required container POD  Pulled     Container image "openshift3/ose-pod:v3.1.1.6" already present on machine
  1m         1m        1      {kubelet atomic-node1}  implicitly required container POD  Created    Created with docker id f55ce55e6ea3
  1m         1m        1      {kubelet atomic-node1}  implicitly required container POD  Started    Started with docker id f55ce55e6ea3
  1m         1m        1      {kubelet atomic-node1}  spec.containers{glusterfs}         Pulling    pulling image "gluster/gluster-centos"
  8s         8s        1      {kubelet atomic-node1}  spec.containers{glusterfs}         Pulled     Successfully pulled image "gluster/gluster-centos"
  8s         8s        1      {kubelet atomic-node1}  spec.containers{glusterfs}         Created    Created with docker id ff8f4af700d7
  8s         8s        1      {kubelet atomic-node1}  spec.containers{glusterfs}         Started    Started with docker id ff8f4af700d7

From the above events, you can see it pulled the `gluster/gluster-centos` container image and deployed a container from it.

[root@atomic-node2 gluster_pod]# oc get pods
NAME        READY     STATUS    RESTARTS   AGE
gluster-1   1/1       Running   0          1m

Examine the container and make sure it has a running GlusterFS daemon.

[root@atomic-node2 gluster_pod]# oc exec -ti gluster-1 /bin/bash

Examine the processes running in this container and the `glusterd` service information.

[root@atomic-node1 /]# ps aux
USER  PID %CPU %MEM    VSZ   RSS TTY  STAT START TIME COMMAND
root    1  0.4  0.0  40780  2920 ?    Ss   04:50 0:00 /usr/sbin/init
root   20  0.3  0.0  36816  4272 ?    Ss   04:50 0:00 /usr/lib/syste
root   21  0.0  0.0 118476  1332 ?    Ss   04:50 0:00 /usr/sbin/lvme
root   37  0.0  0.0 101344  1228 ?    Ssl  04:50 0:00 /usr/sbin/gssp
rpc    44  0.1  0.0  64904  1052 ?    Ss   04:50 0:00 /sbin/rpcbind
root  209  0.1  0.1 364716 13444 ?    Ssl  04:50 0:00 /usr/sbin/glus
root  341  1.1  0.0  13368  1964 ?    Ss   04:51 0:00 /bin/bash
root  354  0.0  0.0  49020  1820 ?    R+   04:51 0:00 ps aux

[root@atomic-node1 /]# service glusterd status
Redirecting to /bin/systemctl status glusterd.service
● glusterd.service - GlusterFS, a clustered file-system server
   Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2016-05-17 04:50:41 UTC; 35s ago
  Process: 208 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 209 (glusterd)
   CGroup: /system.slice/docker-ff8f4af700d725dfe0e08939ec011c34ddf9dedc7204e0ced1cc355a56150742.scope/system.slice/glusterd.service
           └─209 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO...
           ‣ 209 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO...

May 17 04:50:36 atomic-node1 systemd[1]: Starting Gluste...
May 17 04:50:41 atomic-node1 systemd[1]: Started Gluster...
Hint: Some lines were ellipsized, use -l to show in full.

Let's fetch some more details about GlusterFS in this container.

[root@atomic-node1 /]# gluster --version
glusterfs 3.7.9 built on Mar 20 2016 03:19:49
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General Public License.
[root@atomic-node1 /]#

[root@atomic-node1 /]# mount | grep mnt
/dev/mapper/atomic-node1-root on /mnt/brick1 type xfs (rw,relatime,seclabel,attr2,inode64,noquota)

This container is built on top of the CentOS base image, as shown below.

[root@atomic-node1 /]# cat /etc/redhat-release
CentOS Linux release 7.2.1511 (Core)
[root@atomic-node1 /]#

In this article we discussed how to run GlusterFS as a pod in a Kubernetes or OpenShift setup. [Part 2] covers how to use GlusterFS as a service and as a Persistent Volume for a Persistent Volume Claim. [Part 3] covers how to use a GlusterFS template to deploy GlusterFS pods in an OpenShift/Kubernetes setup.

August 24, 2016

[Coming Soon] Dynamic Provisioning of GlusterFS volumes in Kubernetes/Openshift!!

In this context I am talking about the dynamic provisioning capability of the 'glusterfs' plugin in Kubernetes/OpenShift. I have submitted a Pull Request to Kubernetes to add this functionality for GlusterFS. At present there are no network storage provisioners in Kubernetes, even though there are cloud provider provisioners. The idea here is to make the glusterfs plugin capable of provisioning volumes on demand from Kubernetes/OpenShift. Cool, isn't it? Indeed, this is a nice feature to have. With it, an OSE user requests some space (for example, 20G), and the glusterfs plugin takes this request, creates a 20G volume and binds it to the claim. The plugin can use any REST service, but the example patch is based on 'heketi'. Here is the workflow. Start your Kubernetes controller manager with the highlighted options:

 ...kube controller-manager --v=3 \
   --service-account-private-key-file=/tmp/kube-serviceaccount.key \
   --root-ca-file=/var/run/kubernetes/apiserver.crt \
   --enable-hostpath-provisioner=false \
   --enable-network-storage-provisioner=true \
   --storage-config=/tmp \
   --net-provider=glusterfs \
   --pvclaimbinder-sync-period=15s \
   --cloud-provider= \
   --master=127.0.0.1:8080

Create a file called `gluster.json` in the `/tmp` directory. The important fields in this config file are 'endpoint' and 'resturl'. The endpoint has to be defined and must match your setup. The `resturl` is the URL of the REST service that takes the request and creates a gluster volume in the backend. As mentioned earlier, I am using `heketi` for this.

 [hchiramm@dhcp35-111 tmp]$ cat gluster.json
 {
   "endpoint": "glusterfs-cluster",
   "resturl": "http://127.0.0.1:8081",
   "restauthenabled": false,
   "restuser": "",
   "restuserkey": ""
 }
 [hchiramm@dhcp35-111 tmp]$

We have to define an ENDPOINT and a SERVICE. Below are the example configuration files. ENDPOINT: the "ip" field has to be filled with an IP from your gluster trusted pool.


[hchiramm@dhcp35-111 ]$ cat glusterfs-endpoint.json
{
  "kind": "Endpoints",
  "apiVersion": "v1",
  "metadata": {
    "name": "glusterfs-cluster"
  },
  "subsets": [
    {
      "addresses": [
        {
          "ip": "10.36.4.112"
        }
      ],
      "ports": [
        {
          "port": 1
        }
      ]
    },
    {
      "addresses": [
        {
          "ip": "10.36.4.112"
        }
      ],
      "ports": [
        {
          "port": 1
        }
      ]
    }
  ]
}

SERVICE: Please note that the Service name matches the ENDPOINT name.


[hchiramm@dhcp35-111 ]$ cat gluster-service.json
{
  "kind": "Service",
  "apiVersion": "v1",
  "metadata": {
    "name": "glusterfs-cluster"
  },
  "spec": {
    "ports": [
      {"port": 1}
    ]
  }
}
[hchiramm@dhcp35-111 ]$

Finally, we have a Persistent Volume Claim file as shown below. NOTE: the requested size of the volume is 20Gi:


[hchiramm@dhcp35-111 ]$ cat gluster-pvc.json
{
  "kind": "PersistentVolumeClaim",
  "apiVersion": "v1",
  "metadata": {
    "name": "glusterc",
    "annotations": {
      "volume.alpha.kubernetes.io/storage-class": "glusterfs"
    }
  },
  "spec": {
    "accessModes": [
      "ReadOnlyMany"
    ],
    "resources": {
      "requests": {
        "storage": "20Gi"
      }
    }
  }
}
[hchiramm@dhcp35-111 ]$

Let's start defining the endpoint, service and PVC.


[hchiramm@dhcp35-111 ]$ ./kubectl create -f glusterfs-endpoint.json
endpoints "glusterfs-cluster" created
[hchiramm@dhcp35-111 ]$ ./kubectl create -f gluster-service.json
service "glusterfs-cluster" created
[hchiramm@dhcp35-111 ]$ ./kubectl get ep,service
NAME ENDPOINTS AGE
ep/glusterfs-cluster 10.36.6.105:1 14s
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
svc/glusterfs-cluster 10.0.0.10 1/TCP 9s
svc/kubernetes 10.0.0.1 443/TCP 13m
[hchiramm@dhcp35-111 ]$ ./kubectl get pv,pvc
[hchiramm@dhcp35-111 ]$

Now, let's request a claim!

[hchiramm@dhcp35-111 ]$ ./kubectl create -f glusterfs-pvc.json
persistentvolumeclaim "glusterc" created
[hchiramm@dhcp35-111 ]$ ./kubectl get pv,pvc
NAME CAPACITY ACCESSMODES STATUS CLAIM REASON AGE
pv/pvc-39ebcdc5-442b-11e6-8dfa-54ee7551fd0c  20Gi ROX  Bound  default/glusterc 2s
NAME STATUS VOLUME CAPACITY ACCESSMODES AGE
pvc/glusterc Bound pvc-39ebcdc5-442b-11e6-8dfa-54ee7551fd0c 0 3s
[hchiramm@dhcp35-111 ]$

Awesome! Based on the request, it created a PV and bound it to the PV claim!


[hchiramm@dhcp35-111 ]$ ./kubectl describe pv pvc-39ebcdc5-442b-11e6-8dfa-54ee7551fd0c
Name: pvc-39ebcdc5-442b-11e6-8dfa-54ee7551fd0c
Labels:
Status: Bound
Claim: default/glusterc
Reclaim Policy: Delete
Access Modes: ROX
Capacity: 20Gi
Message:
Source:
Type: Glusterfs (a Glusterfs mount on the host that shares a pod's lifetime)
EndpointsName: glusterfs-cluster
 Path: vol_038b56756f4e3ab4b07a87494097941c
ReadOnly: false
No events.
[hchiramm@dhcp35-111 ]$
 

Verify that the volume exists in the backend:

 [root@ ~]# heketi-cli volume list |grep 038b56756f4e3ab4b07a87494097941c
 038b56756f4e3ab4b07a87494097941c
 [root@ ~]#

Let's delete the PV claim --


[hchiramm@dhcp35-111 ]$ ./kubectl delete pvc glusterc
persistentvolumeclaim "glusterc" deleted
[hchiramm@dhcp35-111 ]$ ./kubectl get pv,pvc
[hchiramm@dhcp35-111 ]$

It got deleted! Verify it from the backend:


 [root@ ~]# heketi-cli volume list |grep 038b56756f4e3ab4b07a87494097941c
 [root@ ~]# 

We can use this volume in application pods by referring to the claim name, as sketched below. I hope you agree this is a nice feature to have!
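For illustration, a minimal sketch of an application pod that consumes the dynamically provisioned volume through the claim could look like this (the pod name, image and mount path are hypothetical; only `claimName: glusterc` comes from the example above):

apiVersion: v1
kind: Pod
metadata:
  name: gluster-app                  # hypothetical pod name
spec:
  containers:
  - name: app
    image: nginx                     # any application image
    volumeMounts:
    - name: glustervol
      mountPath: /var/www/html       # hypothetical mount path inside the container
  volumes:
  - name: glustervol
    persistentVolumeClaim:
      claimName: glusterc            # the PVC created above
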

Please let me know if you have any comments/suggestions.

Also, the patch https://github.com/kubernetes/kubernetes/pull/30888 is undergoing review upstream, as mentioned earlier, and hopefully it will make it into a Kubernetes release soon. I will provide an update here as soon as it is available upstream.

March 29, 2016

Persistent Volume and Claim in OpenShift and Kubernetes using GlusterFS Volume Plugin

OpenShift is a platform as a service product from Red Hat. The software that runs the service is open-sourced under the name OpenShift Origin, and is available on GitHub.

OpenShift v3 is a layered system designed to expose underlying Docker and Kubernetes concepts as accurately as possible, with a focus on easy composition of applications by a developer. For example, install Ruby, push code, and add MySQL.

Docker is an open platform for developing, shipping, and running applications. With Docker you can separate your applications from your infrastructure and treat your infrastructure like a managed application. Docker does this by combining kernel containerization features with workflows and tooling that help you manage and deploy your applications. Docker containers wrap up a piece of software in a complete filesystem that contains everything it needs to run: code, runtime, system tools, system libraries – anything you can install on a server. Available on GitHub.

Kubernetes is an open-source system for automating deployment, operations, and scaling of containerized applications. It groups containers that make up an application into logical units for easy management and discovery. Kubernetes builds upon a decade and a half of experience of running production workloads at Google, combined with best-of-breed ideas and practices from the community. Available on GitHub.

GlusterFS is a scalable network filesystem. Using common off-the-shelf hardware, you can create large, distributed storage solutions for media streaming, data analysis, and other data- and bandwidth-intensive tasks. GlusterFS is free and open source software. Available on GitHub.

Hopefully you know a little bit about all of the above technologies; now we jump right into our topic, which is Persistent Volumes and Persistent Volume Claims in Kubernetes and OpenShift v3 using GlusterFS volumes. So what is a Persistent Volume? Why do we need it? How does it work with the GlusterFS volume plugin?

In Kubernetes, managing storage is a distinct problem from managing compute. The PersistentVolume subsystem provides an API for users and administrators that abstracts the details of how storage is provided from how it is consumed. For this, Kubernetes introduces two API resources: PersistentVolume and PersistentVolumeClaim.

A PersistentVolume (PV) is a piece of networked storage in the cluster that has been provisioned by an administrator. It is a resource in the cluster just like a node is a cluster resource. PVs are volume plugins like Volumes, but have a lifecycle independent of any individual pod that uses the PV. This API object captures the details of the implementation of the storage, be that NFS, iSCSI, or a cloud-provider-specific storage system.

A PersistentVolumeClaim (PVC) is a request for storage by a user. It is similar to a pod. Pods consume node resources and PVCs consume PV resources. Pods can request specific levels of resources (CPU and memory). Claims can request a specific size and access modes (e.g., it can be mounted once read/write or many times read-only).

In simple words: containers in a Kubernetes cluster need storage that persists even if the container goes down or is no longer needed. So the Kubernetes administrator creates the storage (GlusterFS storage, in this case) and creates a PV for that storage. When a developer (a Kubernetes cluster user) needs a Persistent Volume in a container, he or she creates a Persistent Volume Claim. The claim contains the options the developer needs for the pods. The best match from the list of Persistent Volumes is then selected and bound to the claim. Now the developer can use the claim in the pods.


Prerequisites:

1) You need a Kubernetes or OpenShift cluster; my setup is one master and three nodes.

Note: you can use kubectl in place of oc; oc is the OpenShift client, which wraps kubectl and adds OpenShift-specific commands.


#oc get nodes
NAME LABELS STATUS AGE
dhcp42-144.example.com kubernetes.io/hostname=dhcp42-144.example.com,name=node3 Ready 15d
dhcp42-235.example.com kubernetes.io/hostname=dhcp42-235.example.com,name=node1 Ready 15d
dhcp43-174.example.com kubernetes.io/hostname=dhcp43-174.example.com,name=node2 Ready 15d
dhcp43-183.example.com kubernetes.io/hostname=dhcp43-183.example.com,name=master Ready,SchedulingDisabled 15d

2) Have a GlusterFS cluster set up; create a GlusterFS volume and start it.

# gluster v status
Status of volume: gluster_vol
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick 170.22.42.84:/gluster_brick 49152 0 Y 8771
Brick 170.22.43.77:/gluster_brick 49152 0 Y 7443
NFS Server on localhost 2049 0 Y 7463
NFS Server on 170.22.42.84 2049 0 Y 8792
Task Status of Volume gluster_vol
------------------------------------------------------------------------------
There are no active volume tasks

3) All nodes in the Kubernetes cluster must have the GlusterFS client package installed.

Now we have the prerequisites o/ …

On the Kubernetes master, the administrator has to write the required YAML files, which are given as input to the Kubernetes cluster.

There are three files to be written by the administrator and one by the developer.

Service
The Service keeps the endpoint persistent/active.
Endpoint
The Endpoint points to the GlusterFS cluster location.
PV
The Persistent Volume, in which the administrator defines the gluster volume name, the capacity of the volume and the access mode.
PVC
The Persistent Volume Claim, in which the developer defines the type of storage needed.

STEP 1: Create a service for the gluster volume.


# cat gluster_pod/gluster-service.yaml
apiVersion: "v1"
kind: "Service"
metadata:
  name: "glusterfs-cluster"
spec:
  ports:
  - port: 1
# oc create -f gluster_pod/gluster-service.yaml
service "glusterfs-cluster" created

Verify:

# oc get service
NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE
glusterfs-cluster 172.30.251.13 1/TCP 9m
kubernetes 172.30.0.1 443/TCP,53/UDP,53/TCP 16d

STEP 2: Create an Endpoint for the gluster service

# cat gluster_pod/gluster-endpoints.yaml
apiVersion: v1
kind: Endpoints
metadata:
  name: glusterfs-cluster
subsets:
- addresses:
  - ip: 170.22.43.77
  ports:
  - port: 1

The IP here is the GlusterFS cluster IP.


# oc create -f gluster_pod/gluster-endpoints.yaml
endpoints "glusterfs-cluster" created
# oc get endpoints
NAME ENDPOINTS AGE
glusterfs-cluster 170.22.43.77:1 3m
kubernetes 170.22.43.183:8053,170.22.43.183:8443,170.22.43.183:8053 16d

STEP 3: Create a PV for the gluster volume.

# cat gluster_pod/gluster-pv.yaml
apiVersion: "v1"
kind: "PersistentVolume"
metadata:
  name: "gluster-default-volume"
spec:
  capacity:
    storage: "8Gi"
  accessModes:
  - "ReadWriteMany"
  glusterfs:
    endpoints: "glusterfs-cluster"
    path: "gluster_vol"
    readOnly: false
  persistentVolumeReclaimPolicy: "Recycle"

Note: `path` here is the gluster volume name, the access mode specifies how the volume can be accessed, and capacity is the storage size of the GlusterFS volume.


# oc create -f gluster_pod/gluster-pv.yaml
persistentvolume "gluster-default-volume" created
# oc get pv
NAME LABELS CAPACITY ACCESSMODES STATUS CLAIM REASON AGE
gluster-default-volume 8Gi RWX Available 36s

STEP 4: Create a PVC for the gluster PV.


# cat gluster_pod/gluster-pvc.yaml
apiVersion: "v1"
kind: "PersistentVolumeClaim"
metadata:
  name: "glusterfs-claim"
spec:
  accessModes:
  - "ReadWriteMany"
  resources:
    requests:
      storage: "8Gi"

Note: the developer requests 8 GB of storage with access mode RWX.


# oc create -f gluster_pod/gluster-pvc.yaml
persistentvolumeclaim "glusterfs-claim" created
# oc get pvc
NAME LABELS STATUS VOLUME CAPACITY ACCESSMODES AGE
glusterfs-claim Bound gluster-default-volume 8Gi RWX 14s

Here the PVC is bound as soon as it is created, because it found a PV that satisfies the request. Now let's go and check the PV status.


# oc get pv
NAME LABELS CAPACITY ACCESSMODES STATUS CLAIM REASON AGE
gluster-default-volume 8Gi RWX Bound default/glusterfs-claim 5m

Now the PV has been bound to "default/glusterfs-claim". At this point the developer has a successfully bound Persistent Volume Claim and can use it in a pod, as shown below.

STEP 5: Use the persistent Volume Claim in a Pod defined by the Developer.


# cat gluster_pod/gluster_pod.yaml
kind: Pod
apiVersion: v1
metadata:
  name: mypod
spec:
  containers:
  - name: mygluster
    image: ashiq/gluster-client
    command: ["/usr/sbin/init"]
    volumeMounts:
    - mountPath: "/home"
      name: gluster-default-volume
  volumes:
  - name: gluster-default-volume
    persistentVolumeClaim:
      claimName: glusterfs-claim

The above pod definition pulls the ashiq/gluster-client image (a private image) and starts its init script. The gluster volume is mounted on the host machine by the GlusterFS volume plugin available in Kubernetes and then bind-mounted to the container's /home. This is why all the Kubernetes cluster nodes must have the glusterfs-client package installed.

Let's try running it.


# oc create -f gluster_pod/fedora_pod.yaml
pod "mypod" created
# oc get pods
NAME READY STATUS RESTARTS AGE
mypod 1/1 Running 0 1m

Wow, it's running… let's go and check where it is running.

# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ec57d62e3837 ashiq/gluster-client "/usr/sbin/init" 4 minutes ago Up 4 minutes k8s_myfedora.dc1f7d7a_mypod_default_5d301443-ec20-11e5-9076-5254002e937b_ed2eb8e5
1439dd72fb1d openshift3/ose-pod:v3.1.1.6 "/pod" 4 minutes ago Up 4 minutes k8s_POD.e071dbf6_mypod_default_5d301443-ec20-11e5-9076-5254002e937b_4d6a7afb

Found the Pod running successfully on one of the Kubernetes node.

On the host:


# df -h | grep gluster_vol
170.22.43.77:gluster_vol 35G 4.0G 31G 12% /var/lib/origin/openshift.local.volumes/pods/5d301443-ec20-11e5-9076-5254002e937b/volumes/kubernetes.io~glusterfs/gluster-default-volume

I can see the gluster volume being mounted on the host o/. Let's check inside the container. Note that the random-looking string is the container ID from the docker ps command.


# docker exec -it ec57d62e3837 /bin/bash
[root@mypod /]# df -h | grep gluster_vol
170.22.43.77:gluster_vol 35G 4.0G 31G 12% /home

Yippee, the GlusterFS volume has been mounted inside the container on /home as specified in the pod definition. Let's try writing something to it.


[root@mypod /]# mkdir /home/ashiq
[root@mypod /]# ls /home/
ashiq

Since the AccessMode is RWX I am able to write to the mount point.

That’s all Folks.

Author: Mohamed Ashiq

October 18, 2014

Hacking out an Openshift app

I had an itch to scratch, and I wanted to get a bit more familiar with Openshift. I had used it in the past, but it was time to have another go. The app and the code are now available. Feel free to check out:

https://pdfdoc-purpleidea.rhcloud.com/

This is a simple app that takes the URL of a markdown file on GitHub, and outputs a pandoc-converted PDF. I wanted to use pandoc specifically, because it produces PDFs that are beautifully typeset with LaTeX. To embed a link in your upstream documentation that points to a PDF, just append the file's URL to this app's URL, under a /pdf/ path. For example:

https://pdfdoc-purpleidea.rhcloud.com/pdf/https://github.com/purpleidea/puppet-gluster/blob/master/DOCUMENTATION.md

will send you to a PDF of the puppet-gluster documentation. This will make it easier to accept questions as FAQ patches, without needing to keep an embedded binary PDF in git constantly updated.

If you want to hear more about what I did, read on…

The setup:

Start by getting a free OpenShift account. You'll also want to install the client tools. Nothing is worse than having to interact with your app via a web interface. Hackers use terminals. Luckily, the OpenShift team knows this, and they've created a great command line tool called rhc to make it all possible.

I started by following their instructions:

$ sudo yum install rubygem-rhc
$ sudo gem update rhc

Unfortunately, this left me with a problem:

$ rhc
/usr/share/rubygems/rubygems/dependency.rb:298:in `to_specs': Could not find 'rhc' (>= 0) among 37 total gem(s) (Gem::LoadError)
    from /usr/share/rubygems/rubygems/dependency.rb:309:in `to_spec'
    from /usr/share/rubygems/rubygems/core_ext/kernel_gem.rb:47:in `gem'
    from /usr/local/bin/rhc:22:in `'

I solved this by running:

$ gem install rhc

This makes my user-installed rhc take precedence over the system one. Then run:

$ rhc setup

and the rhc client will take you through some setup steps, such as uploading your public ssh key to the OpenShift infrastructure. The beauty of this tool is that it works with the Red Hat hosted infrastructure, or you can use it with your own infrastructure if you want to host your own OpenShift servers. This alone means you'll never get locked in to a third-party provider's terms or pricing.

Create a new app:

To get a fresh python 3.3 app going, you can run:

$ rhc create-app <appname> python-3.3

From this point on, it's fairly straightforward, and you can now hack your way through the app in Python. To push a new version of your app into production, it's just a git commit away:

$ git add -p && git commit -m 'Awesome new commit...' && git push && rhc tail

Creating a new app from existing code:

If you want to push a new app from an existing code base, it’s as easy as:

$ rhc create-app awesomesauce python-3.3 --from-code https://github.com/purpleidea/pdfdoc
Application Options
-------------------
Domain:      purpleidea
Cartridges:  python-3.3
Source Code: https://github.com/purpleidea/pdfdoc
Gear Size:   default
Scaling:     no

Creating application 'awesomesauce' ... done


Waiting for your DNS name to be available ... done

Cloning into 'awesomesauce'...
The authenticity of host 'awesomesauce-purpleidea.rhcloud.com (203.0.113.13)' can't be established.
RSA key fingerprint is 00:11:22:33:44:55:66:77:88:99:aa:bb:cc:dd:ee:ff.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'awesomesauce-purpleidea.rhcloud.com,203.0.113.13' (RSA) to the list of known hosts.

Your application 'awesomesauce' is now available.

  URL:        http://awesomesauce-purpleidea.rhcloud.com/
  SSH to:     00112233445566778899aabb@awesomesauce-purpleidea.rhcloud.com
  Git remote: ssh://00112233445566778899aabb@awesomesauce-purpleidea.rhcloud.com/~/git/awesomesauce.git/
  Cloned to:  /home/james/code/awesomesauce

Run 'rhc show-app awesomesauce' for more details about your app.

In my case, my app also needs some binaries installed. I haven't yet automated this process, but I think it can be done by creating a custom cartridge. Help to do this would be appreciated!

Updating your app:

In the case of an app that I already deployed with this method, updating it from the upstream source is quite easy. You just pull down the relevant commits, and then push them up to your app's git repo:

$ git pull upstream master 
From https://github.com/purpleidea/pdfdoc
 * branch            master     -> FETCH_HEAD
Updating 5ac5577..bdf9601
Fast-forward
 wsgi.py | 2 --
 1 file changed, 2 deletions(-)
$ git push origin master 
Counting objects: 7, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (3/3), done.
Writing objects: 100% (3/3), 312 bytes | 0 bytes/s, done.
Total 3 (delta 2), reused 0 (delta 0)
remote: Stopping Python 3.3 cartridge
remote: Waiting for stop to finish
remote: Waiting for stop to finish
remote: Building git ref 'master', commit bdf9601
remote: Activating virtenv
remote: Checking for pip dependency listed in requirements.txt file..
remote: You must give at least one requirement to install (see "pip help install")
remote: Running setup.py script..
remote: running develop
remote: running egg_info
remote: creating pdfdoc.egg-info
remote: writing pdfdoc.egg-info/PKG-INFO
remote: writing dependency_links to pdfdoc.egg-info/dependency_links.txt
remote: writing top-level names to pdfdoc.egg-info/top_level.txt
remote: writing manifest file 'pdfdoc.egg-info/SOURCES.txt'
remote: reading manifest file 'pdfdoc.egg-info/SOURCES.txt'
remote: writing manifest file 'pdfdoc.egg-info/SOURCES.txt'
remote: running build_ext
remote: Creating /var/lib/openshift/00112233445566778899aabb/app-root/runtime/dependencies/python/virtenv/venv/lib/python3.3/site-packages/pdfdoc.egg-link (link to .)
remote: pdfdoc 0.0.1 is already the active version in easy-install.pth
remote: 
remote: Installed /var/lib/openshift/00112233445566778899aabb/app-root/runtime/repo
remote: Processing dependencies for pdfdoc==0.0.1
remote: Finished processing dependencies for pdfdoc==0.0.1
remote: Preparing build for deployment
remote: Deployment id is 9c2ee03c
remote: Activating deployment
remote: Starting Python 3.3 cartridge (Apache+mod_wsgi)
remote: Application directory "/" selected as DocumentRoot
remote: Application "wsgi.py" selected as default WSGI entry point
remote: -------------------------
remote: Git Post-Receive Result: success
remote: Activation status: success
remote: Deployment completed with status: success
To ssh://00112233445566778899aabb@awesomesauce-purpleidea.rhcloud.com/~/git/awesomesauce.git/
   5ac5577..bdf9601  master -> master
$

Final thoughts:

I hope this helped you get going with OpenShift. Feel free to send me patches!

Happy hacking!

James


September 18, 2013

Linuxcon day two, Tuesday

Continuing on from yesterday, I’ve met even more interesting people. I chatted with Dianne Mueller about some interesting ideas for gluster+openshift. More to come on that front soon. Hung out with Jono Bacon and talked a bit about puppet-gluster on Ubuntu. If there is interest in the community for this, please let me know. Thanks to John Mark Walker and Red Hat for sponsoring me and introducing me to many of these folks. Hello to all the others that I didn’t mention.

On the hacking side of things, I added proper XML parsing and did a lot of work on fancier firewalling in puppet-gluster. At the moment, here’s how the firewall support works:

  1. Initially, each host doesn’t know about the other nodes.
  2. Puppet runs and each host exports host information to each other node. This opens up the firewall for glusterd so that the hosts can peer.
  3. Now that we know which hosts are in a common pool, we can open up the firewall for each volume’s bricks. Since the volume has not yet been started (or even created) we can’t know which ports are needed, so all incoming ports are permitted from other gluster nodes.
  4. Once the volume is created, and started, the TCP port information will be available, and can be consumed as facts. These facts then refine the previously defined firewall rules, to only allow the needed ports.
  5. Your white-listed firewall setup is now complete.
  6. For users who wish to avoid using this module to configure their firewall, you can set shorewall => false in your gluster::server class. If you want to specify the allowed IP access control manually, that is possible too.

I hope you find this useful. I know I do. Let me know, and

Happy Hacking,

James