all posts tagged Python


by James on October 18, 2014

Hacking out an Openshift app

I had an itch to scratch, and I wanted to get a bit more familiar with Openshift. I had used it in the past, but it was time to have another go. The app and the code are now available. Feel free to check out:

https://pdfdoc-purpleidea.rhcloud.com/

This is a simple app that takes the URL of a markdown file on GitHub, and outputs a pandoc-converted PDF. I wanted to use pandoc specifically, because it produces PDFs that are beautifully typeset with LaTeX. To embed a link in your upstream documentation that points to a PDF, just append the file’s URL to this app’s URL, under a /pdf/ path. For example:

https://pdfdoc-purpleidea.rhcloud.com/pdf/https://github.com/purpleidea/puppet-gluster/blob/master/DOCUMENTATION.md

will send you to a PDF of the puppet-gluster documentation. This will make it easier to accept questions as FAQ patches, without needing to keep an updated binary PDF embedded in git.
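
To make the idea concrete, here is a rough sketch of what such a handler could look like as a tiny WSGI app. To be clear, this is a hypothetical illustration and not the actual pdfdoc code: it assumes pandoc (and a LaTeX engine) is installed on the gear, that rewriting a GitHub “blob” URL to raw.githubusercontent.com is enough to fetch the raw markdown, and the /tmp/out.pdf file name is just a placeholder:

# sketch.py: hypothetical illustration of the /pdf/ idea, not the real app
import subprocess
import urllib.request

def application(environ, start_response):
    path = environ.get('PATH_INFO', '')
    if not path.startswith('/pdf/'):
        start_response('404 Not Found', [('Content-Type', 'text/plain')])
        return [b'Usage: /pdf/<url of a markdown file on github>\n']

    # grab the requested URL; some servers collapse "//", so repair the scheme
    url = path[len('/pdf/'):]
    if '://' not in url:
        url = url.replace(':/', '://', 1)

    # turn a github "blob" link into the raw file it points at
    url = url.replace('github.com', 'raw.githubusercontent.com').replace('/blob/', '/')
    markdown = urllib.request.urlopen(url).read()

    # let pandoc (and LaTeX underneath it) do the heavy lifting
    # (error handling is omitted to keep the sketch short)
    p = subprocess.Popen(['pandoc', '-f', 'markdown', '-o', '/tmp/out.pdf'],
                         stdin=subprocess.PIPE)
    p.communicate(markdown)

    with open('/tmp/out.pdf', 'rb') as f:
        pdf = f.read()
    start_response('200 OK', [('Content-Type', 'application/pdf')])
    return [pdf]

The real implementation is in the pdfdoc repository linked above, if you want to see how it’s actually put together.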

If you want to hear more about what I did, read on…

The setup:

Start by getting a free Openshift account. You’ll also want to install the client tools. Nothing is worse than having to interact with your app via a web interface. Hackers use terminals. Luckily, the Openshift team knows this, and they’ve created a great command line tool called rhc to make it all possible.

I started by following their instructions:

$ sudo yum install rubygem-rhc
$ sudo gem update rhc

Unfortunately, this left me with a problem:

$ rhc
/usr/share/rubygems/rubygems/dependency.rb:298:in `to_specs': Could not find 'rhc' (>= 0) among 37 total gem(s) (Gem::LoadError)
    from /usr/share/rubygems/rubygems/dependency.rb:309:in `to_spec'
    from /usr/share/rubygems/rubygems/core_ext/kernel_gem.rb:47:in `gem'
    from /usr/local/bin/rhc:22:in `<main>'

I solved this by running:

$ gem install rhc

This makes my user-installed rhc take precedence over the system one. Then run:

$ rhc setup

and the rhc client will take you through some setup steps such as uploading your public ssh key to the Openshift infrastructure. The beauty of this tool is that it will work with the Red Hat hosted infrastructure, or you can use it with your own infrastructure if you want to host your own Openshift servers. This alone means you’ll never get locked in to a third-party provider’s terms or pricing.

Create a new app:

To get a fresh python 3.3 app going, you can run:

$ rhc create-app <appname> python-3.3

From this point on, it’s fairly straightforward, and you can now hack your way through the app in python. To push a new version of your app into production, it’s just a git commit away:

$ git add -p && git commit -m 'Awesome new commit...' && git push && rhc tail

Creating a new app from existing code:

If you want to push a new app from an existing code base, it’s as easy as:

$ rhc create-app awesomesauce python-3.3 --from-code https://github.com/purpleidea/pdfdoc
Application Options
-------------------
Domain:      purpleidea
Cartridges:  python-3.3
Source Code: https://github.com/purpleidea/pdfdoc
Gear Size:   default
Scaling:     no

Creating application 'awesomesauce' ... done


Waiting for your DNS name to be available ... done

Cloning into 'awesomesauce'...
The authenticity of host 'awesomesauce-purpleidea.rhcloud.com (203.0.113.13)' can't be established.
RSA key fingerprint is 00:11:22:33:44:55:66:77:88:99:aa:bb:cc:dd:ee:ff.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'awesomesauce-purpleidea.rhcloud.com,203.0.113.13' (RSA) to the list of known hosts.

Your application 'awesomesauce' is now available.

  URL:        http://awesomesauce-purpleidea.rhcloud.com/
  SSH to:     00112233445566778899aabb@awesomesauce-purpleidea.rhcloud.com
  Git remote: ssh://00112233445566778899aabb@awesomesauce-purpleidea.rhcloud.com/~/git/awesomesauce.git/
  Cloned to:  /home/james/code/awesomesauce

Run 'rhc show-app awesomesauce' for more details about your app.

In my case, my app also needs some binaries installed. I haven’t yet automated this process, but I think it can be done by creating a custom cartridge. Help to do this would be appreciated!

Updating your app:

In the case of an app that I already deployed with this method, updating it from the upstream source is quite easy. You just pull down any relevant commits, and then push them up to your app’s git repo:

$ git pull upstream master 
From https://github.com/purpleidea/pdfdoc
 * branch            master     -> FETCH_HEAD
Updating 5ac5577..bdf9601
Fast-forward
 wsgi.py | 2 --
 1 file changed, 2 deletions(-)
$ git push origin master 
Counting objects: 7, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (3/3), done.
Writing objects: 100% (3/3), 312 bytes | 0 bytes/s, done.
Total 3 (delta 2), reused 0 (delta 0)
remote: Stopping Python 3.3 cartridge
remote: Waiting for stop to finish
remote: Waiting for stop to finish
remote: Building git ref 'master', commit bdf9601
remote: Activating virtenv
remote: Checking for pip dependency listed in requirements.txt file..
remote: You must give at least one requirement to install (see "pip help install")
remote: Running setup.py script..
remote: running develop
remote: running egg_info
remote: creating pdfdoc.egg-info
remote: writing pdfdoc.egg-info/PKG-INFO
remote: writing dependency_links to pdfdoc.egg-info/dependency_links.txt
remote: writing top-level names to pdfdoc.egg-info/top_level.txt
remote: writing manifest file 'pdfdoc.egg-info/SOURCES.txt'
remote: reading manifest file 'pdfdoc.egg-info/SOURCES.txt'
remote: writing manifest file 'pdfdoc.egg-info/SOURCES.txt'
remote: running build_ext
remote: Creating /var/lib/openshift/00112233445566778899aabb/app-root/runtime/dependencies/python/virtenv/venv/lib/python3.3/site-packages/pdfdoc.egg-link (link to .)
remote: pdfdoc 0.0.1 is already the active version in easy-install.pth
remote: 
remote: Installed /var/lib/openshift/00112233445566778899aabb/app-root/runtime/repo
remote: Processing dependencies for pdfdoc==0.0.1
remote: Finished processing dependencies for pdfdoc==0.0.1
remote: Preparing build for deployment
remote: Deployment id is 9c2ee03c
remote: Activating deployment
remote: Starting Python 3.3 cartridge (Apache+mod_wsgi)
remote: Application directory "/" selected as DocumentRoot
remote: Application "wsgi.py" selected as default WSGI entry point
remote: -------------------------
remote: Git Post-Receive Result: success
remote: Activation status: success
remote: Deployment completed with status: success
To ssh://00112233445566778899aabb@awesomesauce-purpleidea.rhcloud.com/~/git/awesomesauce.git/
   5ac5577..bdf9601  master -> master
$

Final thoughts:

I hope this helped you get going with Openshift. Feel free to send me patches!

Happy hacking!

James


by James on March 24, 2014

Introducing Puppet Exec[‘again’]

Puppet is missing a number of much-needed features. That’s the bad news. The good news is that I’ve been able to write some of these as modules that don’t need to change the Puppet core! This is an article about one of these features.

Posit: It’s not possible to apply all of your Puppet manifests in a single run.

I believe that this holds true for the current implementation of Puppet. Most manifests can, do, and should apply completely in a single run. If your configuration takes more than one run to converge, then chances are that you’re doing something wrong.

(For the sake of this article, convergence means that everything has been applied cleanly, and that a subsequent Puppet run wouldn’t have any work to do.)

There are some advanced corner cases where this is not possible. In these situations, you will either have to wait for the next Puppet run (by default it runs every 30 minutes) or keep running Puppet manually until your configuration has converged. Neither of these situations is acceptable because:

  • Waiting 30 minutes while your machines are idle is (mostly) a waste of time.
  • Doing manual work to set up your automation kind of defeats the purpose.
[image: 'Are you stealing those LCDs?' 'Yeah, but I'm doing it while my code compiles.']

Waiting 30 minutes while your machines are idle is (mostly) a waste of time. Okay, maybe it’s not entirely a waste of time :)

So what’s the solution?

Introducing: Puppet Exec[‘again’] !

Exec[‘again’] is a feature which I’ve added to my Puppet-Common module.

What does it do?

Each Puppet run, your code can decide if it thinks there is more work to do, or if the host is not in a converged state. If so, it will tell Exec[‘again’].

What does Exec[‘again’] do?

Exec[‘again’] will fork a process off from the running puppet process. It will wait until that parent process has finished, and then it will spawn (technically: execvpe) a new puppet process to run puppet again. The module is smart enough to inspect the parent puppet process, and it knows how to run the child puppet. Once the new child puppet process is running, you won’t see any leftover process id from the parent Exec[‘again’] tool.
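
To make the mechanics a bit more concrete, here is a minimal Python sketch of that fork, wait, and execvpe dance. This is not the actual again.py that ships with my Puppet-Common module (the real tool inspects the parent puppet process to figure out the correct command line); the ['puppet', 'agent', '--test'] command at the bottom is just a hypothetical placeholder:

#!/usr/bin/env python
# sketch of the fork/wait/execvpe idea; not the real again.py
import os
import time

def run_again(parent_pid, argv, delta=0):
    if os.fork() != 0:
        return  # the current puppet run carries on undisturbed

    os.setsid()  # detach from the parent's session

    # poll until the parent puppet process has exited
    while True:
        try:
            os.kill(parent_pid, 0)  # signal 0 only checks that the pid exists
        except OSError:
            break
        time.sleep(1)

    if delta > 0:
        time.sleep(delta)  # optional minimum delay before the next run

    # replace this process with a fresh puppet run (no leftover pid)
    os.execvpe(argv[0], argv, os.environ)

if __name__ == '__main__':
    # hypothetical usage: respawn 'puppet agent --test' after our caller exits
    run_again(os.getppid(), ['puppet', 'agent', '--test'], delta=0)

The important property is the execvpe at the end: the waiting helper replaces itself with the new puppet process, which is why no extra process id sticks around.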

How do I tell it to run?

It’s quite simple: all you have to do is include my puppet module, and then notify the magic Exec[‘again’] type that my class defines. Example:

include common::again

$some_str = 'ttboj is awesome'
# you can notify from any type that can generate a notification!
# typically, using exec is the most common, but is not required!
file { '/tmp/foo':
    content => "${some_str}\n",
    notify => Exec['again'], # notify puppet!
}

How do I decide if I need to run again?

This depends on your module, and isn’t always a trivial thing to figure out. In one case, I had to build a finite state machine in puppet to help decide whether this was necessary or not. In some cases, the solution might be simpler. In all cases, this is an advanced technique, so you’ll probably already have a good idea about how to figure this out if you need this type of technique.

Can I introduce a minimum delay before the next run happens?

Yes, absolutely. This is particularly useful if you are building a distributed system, and you want to give other hosts a chance to export resources before each successive run. Example:

include common::again

# when notified, this will run puppet again, delta sec after it ends!
common::again::delta { 'some-name':
    delta => 120, # 2 minutes (pick your own value)
}

# to run the above Exec['again'] you can use:
exec { '/bin/true':
    onlyif => '/bin/false', # TODO: some condition
    notify => Common::Again::Delta['some-name'],
}

Can you show me a real-world example of this module?

Have a look at the Puppet-Gluster module. This module was one of the reasons that I wrote the Exec[‘again’] functionality.

Are there any caveats?

Maybe! It’s possible to cause a fast “infinite loop”, where Puppet gets run unnecessarily. This could effectively DDoS your puppetmaster if left unchecked, so please use it with caution! Keep in mind that puppet typically runs in an infinite loop already, except with a 30 minute interval.

Help, it won’t stop!

Either your code has become sentient and has decided it wants to enable kerberos, or you’ve got a bug in your Puppet manifests. If you fix the bug, things should eventually go back to normal. To kill the process that’s re-spawning puppet, look for it in your process tree. Example:

[root@server ~]# ps auxww | grep again[.py]
root 4079 0.0 0.7 132700 3768 ? S 18:26 0:00 /usr/bin/python /var/lib/puppet/tmp/common/again/again.py --delta 120
[root@server ~]# killall again.py
[root@server ~]# echo $?
0
[root@server ~]# ps auxww | grep again[.py]
[root@server ~]# killall again.py
again.py: no process killed
[root@server ~]#

Does this work with puppet running as a service or with puppet agent --test?

Yes.

How was the spawn/exec logic implemented?

The spawn/exec logic was implemented as a standalone python program that gets copied to your local system, and does all the heavy lifting. Please have a look and let me know if you can find any bugs!

Conclusion

I hope you enjoyed this addition to your toolbox. Please remember to use it with care. If you have a legitimate use for it, please let me know so that I can better understand your use case!

Happy hacking,

James


by James on January 19, 2013

Extending GlusterFS with Python (Linux Journal)

Jeff Darcy, Gluster developer extraordinaire with Red Hat, has written an article all about extending our favorite distributed storage system with Python. He gets into a fair bit of detail with Glupy, his project for implementing new GlusterFS features in Python. Glupy does this by utilizing GlusterFS’ established translator API, which you can read about on gluster.org.

I’ll let Jeff introduce the article in his own words:

Are you a Python programmer who wishes your storage could do more for you? Here’s an easy way to add functionality to a real distributed filesystem, in your favorite language.

And then he gets into the good stuff:

As a case study in combining Python code with an existing compiled program or language, this article focuses on the implementation of a Python “translator” interface for GlusterFS. GlusterFS is a modern distributed filesystem based on the principle of horizontal scaling—adding capacity or performance to a system by adding more servers based on commodity hardware instead of having to pay an ever-increasing premium to make existing servers more powerful. Development is sponsored by Red Hat, but it’s completely open source, so anyone can contribute. In addition to horizontal scaling, another core principle of GlusterFS is modularity. Most of the functionality within GlusterFS actually is provided by translators—so called because they translate I/O calls (such as read or write) coming from the user into the same or other calls that are passed on toward storage. These calls are passed from one translator to another, arranged in an arbitrarily complex hierarchy, until eventually the lowest-level calls are executed on servers’ local filesystems. I call this interface TXAPI here for the sake of brevity, even though that’s not an official term. TXAPI has been used to implement internal GlusterFS functionality, such as replication and caching, and also external functionality, such as on-disk encryption.

I highly recommend the full article at LinuxJournal.com for your bedtime reading.