Subpixel rendering in Fedora 22

I spent last week trying to get subpixel rendering working with Fedora's freetype package, and even though it had the patch...

--- freetype-2.5.2/include/config/ftoption.h 2013-12-08 19:40:19.000000000 +0100
+++ freetype-2.5.2-enable-spr/include/config/ftoption.h 2014-01-17 19:17:34.000000000 +0100
@@ -92,7 +92,7 @@
   /* This is done to allow FreeType clients to run unmodified, forcing     */
   /* them to display normal gray-level anti-aliased glyphs.                */
   /*                                                                       */


... the fonts still looked horrible (e.g. on this web page)

Today I actually read the spec file, and it started with...

# Patented subpixel rendering disabled by default.
# Pass '--with subpixel_rendering' on rpmbuild command-line to enable.
%{!?_with_subpixel_rendering: %{!?_without_subpixel_rendering: %define _without_subpixel_rendering --without-subpixel_rendering}}

(insert a minute of swearing)

Everybody seems to be hyped about the Infinality patches, and they do make the fonts look great, but those were already merged.

rpmbuild  -bb freetype.spec --with subpixel_rendering

Even the Cantarell font looks almost decent now.


Learning from Ubuntu One

Since the Ubuntu One file sync server was recently open-sourced, I'd like to highlight some of the historical decisions that led to the creation (and, well, demise) of the file synchronization service originally known as Ubuntu One.

Note that Martin Albisetti has provided an architectural overview of the server side, so you may want to look at that too.

Originally, the file sync was not really a sync. Your files were supposed to be stored online and accessible over a FUSE mount when needed. While this was very convenient in theory, the abstraction would shatter the moment you went offline or the process or kernel crashed. An intelligent caching scheme would have been required to mitigate the issue, but...

There are 2 hard problems in computer science: caching, naming, and off-by-1 errors

So, at some point this version of the client was scrapped and a new project was born, codenamed "chicharra", also known as "syncdaemon".

Instead of using an off-the-shelf file transfer protocol like HTTP, a decision was made to create a brand new one, based on Google's Protocol Buffers as the payload format. It was called "storageprotocol". The server part, called "updown", listened on port 443, but it was not an HTTPS server. The custom protocol made it harder to support non-trivial networking setups, such as the ones involving proxies, and proxy support took about four years to get implemented. The clients would also try to get the location of the updown node via DNS SRV requests before falling back to the hardcoded default, and it turned out that SRV records were sometimes blocked by ISPs.

The files were compressed on the client, and the compressed blob was stored as-is on the server side. This also turned out to be a disadvantage. Both a music streaming application and wget downloading a public file could request a byte Range of the file rather than the whole blob, which required decompressing the blob in a way that resembles rewinding a tape. And yes, even already-compressed MP3 files were compressed again before being sent to the server, causing noticeable CPU spikes during the hashing/compression stage.
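To illustrate the Range problem, here is a toy sketch (not the actual Ubuntu One code; the names are made up): a plain zlib-compressed blob has no random access, so serving any byte range of the original file means decompressing from the very beginning, every single request.

```python
import zlib

# Pretend server storage: the original file is kept only as a compressed blob.
original = bytes(range(256)) * 1000        # stand-in for a user's file
blob = zlib.compress(original)

def read_range(compressed: bytes, start: int, end: int) -> bytes:
    """Serve original[start:end] out of the compressed blob.

    A zlib stream has no random access, so we must decompress
    everything up to (and past) the requested offset, much like
    rewinding a tape to the right spot.
    """
    data = zlib.decompress(compressed)     # full decompression every request
    return data[start:end]

chunk = read_range(blob, 100000, 100010)
assert chunk == original[100000:100010]
```

The byte range itself is tiny, but the work done to serve it is proportional to its offset in the file.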

To avoid uploading a file that already existed on the server, SyncDaemon would hash the file locally and send the hash to the server, and the server could immediately reply that the file was already there. Figuring out how we battled the "dropship"-like abuse is left as an exercise for the reader.
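The dedup handshake can be sketched roughly like this (a simplification with made-up names; the real storageprotocol messages looked nothing like Python calls):

```python
import hashlib

server_store = {}   # hash -> blob; stands in for the server side

def upload(blob: bytes) -> str:
    """Client side: send only the content hash first; transfer the
    bytes only if the server does not already have that content."""
    digest = hashlib.sha1(blob).hexdigest()
    if digest in server_store:
        return "skipped"            # server already has this content
    server_store[digest] = blob     # only now do we pay for the transfer
    return "uploaded"

assert upload(b"holiday-photos.tar") == "uploaded"
assert upload(b"holiday-photos.tar") == "skipped"   # same bytes, instant "upload"
```

This is also exactly what makes "dropship"-style abuse possible: anyone who knows a hash can claim to "have" the file.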

Not all calls were implemented via storageprotocol, though: some of the requests (such as publishing files, or getting the quota for the UI) still went through the HTTP servers, and syncdaemon simply proxied those calls to endpoints designed specifically to support these features.

Almost everything syncdaemon had to share with the world was exposed through DBus, which made it extremely easy to interface with. This enabled u1sdtool to control the service, the Nautilus extension to show emblems on files, and Shutter (written in Perl!) to publish screenshots to Ubuntu One. The Windows version could not use DBus, so it used twisted.spread over a loopback TCP socket instead.

The Ubuntu syncdaemon client was quite usable, but when the time came to create Android and iOS clients, the custom always-connected storageprotocol was not cooperating. There was a functional implementation of the syncdaemon protocol in Java to be used by Android, but it was fairly slow at the tasks users want to do on their phones, namely browsing, downloading and uploading files. This required an actual REST API. Once it was implemented, Android and iOS applications were released to a wide audience, and James Henstridge implemented an FTP proxy which would translate FTP calls into Ubuntu One REST API calls.

Ubuntu Stable Release Updates kept the software from reaching people right when it was ready, forcing the team to support multiple software versions across multiple Ubuntu releases with no way of merging the latest code into LTS releases. Dropbox, on the other hand, had a repository that supported multiple Ubuntu versions from the same code base.

On the other hand, the server side was evolving rapidly. The original infrastructure was not able to support the growing user base of the service (after all, Ubuntu One was pre-installed on Ubuntu), and, as Martin said, massive architecture changes were made to get it to work.


The lessons, then:

  • Inventing your own protocols is expensive and should generally be avoided.
  • Measure things before optimizing them.
  • Plan for public APIs beforehand.

So You Hired a Contractor

This post originally appeared on LinkedIn on July 1st.

Let's imagine you decided to hire a consulting company to enhance your product, because your team lacks intricate knowledge of the technology.

The Management is happy: the product will be shipped faster, with more features, and on time.

This is not how it works, sorry.

When you hire a software engineer, you need to teach them the processes your company follows for releasing software (oh, you don't have a defined process? I would not have high hopes for your product), the coding practices, and the points of interaction with other teams. Now, if you bring a contractor in, you are bringing in their processes, which may or may not be similar to yours (oh, they don't have a defined process? Your product will not ship). This adaptation will take time. A lot of time.


Requirements: You

If you want your project to ship, you must already have the following items:

  • Infrastructure plan for the upcoming project. Don't expect the code to compile, package, deploy, and productionize itself. You need infrastructure to do that. If you don't have it, you will need to learn as you go, and for this to work...
  • All the documentation needs to be available in a form that is accessible to your team at all times. If you send Word documents back and forth, you are asking for trouble: you will have diverging versions floating around, you will lose changes, and the dog will at some point eat somebody's homework. Confluence, GitHub or any other wiki-based system, Google Docs, even SharePoint! If you are afraid that a wiki is the place where documentation goes to die: yes, it is that place, but for the brief time your documentation is still alive, you will be able to reach it without building a document version control system in your head as you go through hundreds of e-mails. You can put your company logo on the front page later and make a PDF out of it. And since the best documentation is the code itself...
  • Once the coding phase starts, all the development work MUST be checked in to your version control system. Remember, the code does not exist until it is checked in. This not only increases visibility into the development process, it also allows the team members on your side to anticipate the infrastructure requirements (scripting, automation, building) that you will definitely not factor in during the setup phase. Now, the code will not be bug-free, so...
  • You MUST implement a formal feedback mechanism for feature requirements and defect reporting. E-mails will be lost and misinterpreted anyway, and everybody will be at a loss about the current project status if you don't do this. And you don't want to tell the management you don't know when the project will be released; remember, you hired contractors because you wanted to ship on time. Use the bug tracking system you already have. Please don't turn it into an Excel spreadsheet that floats around. Make it possible for team members to provide feedback right where it is needed.

Requirements: They

You spend your company's money. You are in charge. The contractors you chose MUST do the following:

  • Have clear installation instructions for all software they are going to install on your servers, if applicable. If the installation instructions involve putting stuff in /home, you already have incompetent people on board. If the installation instructions for the second server say "copy a random configuration directory from the first server", you have just violated the VCS requirement above. You cannot automate something you don't know, so installation instructions are a MUST.
  • Agree to destroy the development environment every so often (once a week?). Since you already have all the installation automated in Chef/Puppet/Ansible (right?) and AMIs/OVFs regularly baked (if you are advanced enough), you will be able to recreate the development environment without any loss. Anything that is not automated will go away. Let everybody (your team and the contractor's team) learn the cost of missing automation the hard way.
  • Be ready to commit the code to your repository. See the paragraph above for why it is important.
  • Be involved in the regular feedback loop. Do code reviews, test every commit. Unless you have a team of superhumans on your end, process resistance on contractors' end will negatively affect your own team. You don't want this to happen.
  • The contractors MUST be generally aware of the environment you are going to run the code in. If they randomly edit system configuration files on the server, change directory permissions and scatter leftover files across the whole filesystem, you have hired developers who have no idea about system administration. Let them play on that development environment, but don't let them anywhere near your staging machines. Period. Make a new environment for integration if you can.



You are happy with the integration on your development environment, and you are ready to move it to staging. And there are one or two defects that are kind of critical, but the deadline is looming. Once the defects are fixed, the automation piece may need to be updated, and that may take a day or two. So you think you can allow it to slip this one time, make a manual change in the server deployment, and rectify it in the production deployment...

Congratulations, you have just converted your staging environment into a developer playground. Expect the project to fail immediately. You need the staging environment to be as close to production as possible; competent engineers would say it must be identical to production: the software, the deployment, all the procedures. And you will lose that change, be it due to EC2 decommissioning the hardware or the DC unexpectedly catching fire. You will be solely responsible for production not having the fix.

Six months past the deadline, your product finally ships. Your team knows all the negative sides of the technologies you used. You swear you will never hire contractors again.

The management has a brilliant idea of integrating your web interface with the newly released Widgets Incorporated Omnibus Service and gives you 3 months. Will you hire contractors again?

Trust, but verify

You can be fooled by the various certifications your contractors hold. An advanced certificate in configuring a Widgets Incorporated Frobnicator 3000 may not involve any knowledge of packaging software into reusable units, or of configuring log rotation and shipping logs to a log server. It only means that once the infrastructure is up and running, the Frobnicator 3000 configuration will not be horrible.


OpenAM creates OpenDJ accounts you don't know about

OpenAM 12.0.1 was recently released (for subscribers only), which fixes this issue. See Issue #201505-05.

TL;DR: If you configure OpenDJ using the OpenAM configurator (either the web configurator or the command-line configurator tool), or if you ask OpenAM to load the LDAP schema via the Data Sources page in the Web UI after installation, your OpenDJ installation will get provisioned with two users, cn=openssouser and cn=ldapuser, with default hardcoded passwords.

You can find these entries in OpenAM-X.war/WEB-INF/template/ldif/opendj/opendj_userinit.ldif:

dn: cn=openssouser,ou=opensso adminusers,@userStoreRootSuffix@
objectclass: inetuser
objectclass: organizationalperson
objectclass: person
objectclass: top
cn: openssouser
sn: openssouser
userPassword: @OPENSSO_USER_PASSWD@

dn: cn=ldapuser,ou=opensso adminusers,@userStoreRootSuffix@
objectclass: inetuser
objectclass: organizationalperson
objectclass: person
objectclass: top
cn: ldapuser
sn: ldapuser
userPassword: @LDAP_USER_PASSWD@

While ldapuser has limited access, cn=openssouser has the following ACI:

aci: (target="ldap:///@userStoreRootSuffix@")(targetattr="*")(version 3.0; acl
"OpenSSO datastore configuration bind  user all rights under the root suffix";
allow (all) userdn = "ldap:///cn=openssouser,ou=opensso adminusers,@userStoreRootSuffix@"; )

Which means that it can do whatever it wants (except to its own entry; there are additional ACIs later in that file).

If this does not make you nervous yet, look at the userPassword values. Yes, you are right: the default password for cn=openssouser is the literal string @OPENSSO_USER_PASSWD@, and the default password for cn=ldapuser is @LDAP_USER_PASSWD@.

This is vaguely described in OPENAM-1036, but that issue does not give much attention to the problem of the exposed passwords. The templating mechanism does not change the values of these fields, so they are kept as is.

These users are left over from the Sun OpenSSO configuration and ideally they should not have been migrated to OpenAM, since even the OpenAM documentation hints at using cn=openam,ou=admins,$basedn in Preparing an Identity Repository.

cn=openssouser was meant to be the user OpenSSO binds as, instead of cn=Directory Manager, as described in Using OpenDS as a user store for OpenSSO. The reasoning behind cn=ldapuser is not clear to me ("This user will have read access to the users entries, this will be used in the policy configuration and LDAP authentication configuration").

Quick Fix

You can see whether anybody has been able to bind as these users by browsing the access logs of OpenDJ.

Disable these users if you know you are not using them. Go to the OpenDJ machine, navigate to the bin directory of the OpenDJ installation and run:

./manage-account -h localhost -p 4444 -D "cn=directory manager" \
                 -w $directory_manager_password -X \
                 set-account-is-disabled --operationValue true \
                 --targetDN "cn=openssouser,ou=opensso adminusers,$basedn"

./manage-account -h localhost -p 4444 -D "cn=directory manager" \
                 -w $directory_manager_password -X \
                 set-account-is-disabled --operationValue true \
                 --targetDN "cn=ldapuser,ou=opensso adminusers,$basedn"

Deleting these users helps, but only until you re-upload the LDAP schema, at which point they will be re-created.

Long Term Fix

Remove or disable these users, then upgrade to OpenAM 12.0.1 so that they don't suddenly reappear. If you can't, edit OpenAM-X.war/WEB-INF/template/ldif/opendj/opendj_userinit.ldif.

MediaFire + Duplicity = Backup


Your backups must always be encrypted. Period. Use the longest passphrase you can remember. Always assume the worst and don't trust anybody saying that your online file storage is perfectly secure. Nothing is perfectly secure.

MediaFire is an evolving project, so my statements may not be applicable, say, in 3 months. However, I suggest you err on the side of caution.

Here are the things you need to be aware of upfront security-wise:

  • There is no two-factor authentication for the MediaFire web interface yet.
  • The sessions created from your username and password have a very long lifespan. It is not possible to destroy a session unless you change your password.
  • The browser part of the web interface can leak your v1 session, since the file viewer (e.g. the picture preview or video player) forces the connection to run over plain HTTP, due to mixed content issues.

All mediafire-python-open-sdk calls are made over HTTPS, but if you are using your MediaFire account in the untrusted environment of a coffee shop WiFi, you will want to use a VPN.

My primary use case for this service is an encrypted off-site backup of my computers, so I found these risks to be acceptable.

Once upon a time

I was involved in Ubuntu One, and when the file synchronization project closed, I was left without a fallback procedure. I continued with local disk backups, searching for something that had:

  • an API which is easy to use,
  • a low price for 100GB of data,
  • an ability to publish files and folders to a wider internet if needed,
  • no requirement for a daemon running on my machine,
  • a first-party Android client.

Having considered quite a few possibilities, I ended up intrigued by MediaFire, partially because they had an API, and they seemingly had a Linux client for uploading things (which I was never able to download from their website), but there was not much integration with other software on my favorite platform. They had a first-year promo price of $25/year, so I started playing with their API, and the "Coalmine" project was born, initially for Python 3.

When I got to the point of uploading a file through the API, I decided to upgrade to a paid account, which does not expire.

I became a frequent visitor on MediaFire Development forums where I reported bugs and asked for documentation updates. I started adding tests to my "Coalmine" project and at some point I included a link to my implementation on the developer forum and got contacted by MediaFire representative asking whether it would be OK for them to copy the repository into their own space, granting me all the development rights.

That's when "Coalmine" became mediafire-python-open-sdk.


Oh, duplicity? Right...


A solid backup strategy was required. I knew about all the hurdles of file synchronization firsthand, so I wanted a dedicated backup solution. Duplicity fit perfectly.

Now, as I said, there were no MediaFire modules for Duplicity, so I ported the SDK to Python 2 with a couple of lines changed. I looked at the Backend class, then put the project aside and continued to upload gpg-encrypted backups of tar archives.

A few weeks ago I finally felt compelled to do something about Duplicity and found that implementing a backend is way easier than it looked.

And now I have another project, duplicity-mediafire.

It is just a backend that expects the MEDIAFIRE_EMAIL and MEDIAFIRE_PASSWORD environment variables to contain MediaFire credentials.

It is not part of Duplicity and won't be proposed for inclusion until I am comfortable with the quality of my MediaFire layer.

Follow the README for installation instructions. I am using the project with duplicity 0.6.25, so it may fail for you if you are using a different version. Please let me know about this via the project issues.

I put my credentials in a duplicity wrapper for now, since dealing with keystores is yet another can of worms.


#!/bin/sh
# The MEDIAFIRE_EMAIL value is a placeholder; the variables must be
# exported so that the exec'd duplicity process can see them.
export MEDIAFIRE_EMAIL='user@example.com'
export MEDIAFIRE_PASSWORD='this is a secret password'
export PASSPHRASE='much secret, wow'

exec /usr/bin/duplicity "$@"

Now, I can run duplicity manually and even put it into cron:

$ duplicity full tmp/logs/ mf://Coalmine/Backup
Local and Remote metadata are synchronized, no sync needed.
Last full backup date: none
--------------[ Backup Statistics ]--------------
StartTime 1430081835.40 (Sun Apr 26 16:57:15 2015)
EndTime 1430081835.48 (Sun Apr 26 16:57:15 2015)
ElapsedTime 0.08 (0.08 seconds)
SourceFiles 39
SourceFileSize 584742 (571 KB)
NewFiles 39
NewFileSize 584742 (571 KB)
DeletedFiles 0
ChangedFiles 0
ChangedFileSize 0 (0 bytes)
ChangedDeltaSize 0 (0 bytes)
DeltaEntries 39
RawDeltaSize 580646 (567 KB)
TotalDestinationSizeChange 376914 (368 KB)
Errors 0

$ duplicity list-current-files mf://Coalmine/Backup
Local and Remote metadata are synchronized, no sync needed.
Last full backup date: Sun Apr 26 16:57:14 2015
Fri Apr 24 21:31:53 2015 .
Fri Apr 24 21:31:52 2015 access_log
Fri Apr 24 21:31:52 2015 access_log.20150407.bz2

$ duplicity mf://Coalmine/Backup /tmp/logs
Local and Remote metadata are synchronized, no sync needed.
Last full backup date: Sun Apr 26 16:57:14 2015

$ ls -l /tmp/logs
total 660
-rw-r--r--. 1 user user   5474 Apr 24 21:31 access_log
-rw-r--r--. 1 user user  15906 Apr 24 21:31 access_log.20150407.bz2
-rw-r--r--. 1 user user  26885 Apr 24 21:31 access_log.20150408.bz2
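As for cron, a crontab entry pointing at the wrapper is enough (the schedule, paths and the wrapper script name here are illustrative):

# weekly full backup on Sunday, incrementals the rest of the week
0 3 * * 0   /home/user/bin/duplicity-wrapper full /home/user mf://Coalmine/duplicity/
0 3 * * 1-6 /home/user/bin/duplicity-wrapper incremental /home/user mf://Coalmine/duplicity/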

If you want to know what's happening during upload behind the scenes, you can uncomment the block:

# import logging
# logging.basicConfig()
# logging.getLogger('mediafire.uploader').setLevel(logging.DEBUG)

This is an early version of the backend, so in case an upload takes too long and no network activity is seen, you may want to terminate the process and rerun the command.


The FILE SELECTION section in duplicity(1) is where you need to look if you want to back up multiple directories.

Now, I've spent half an hour trying to find out how I can back up only the things I want, so here's how:

Create a duplicity.filelist, e.g. in ~/.config, and put the following there:

- /home/user/.config/libvirt
/home/user/.config
/home/user/bin
/home/user/Documents
- **

Now run duplicity with --include-globbing-filelist ~/.config/duplicity.filelist in dry-run mode and you'll get something like this (note that output is fake, so don't try to match the numbers):

$ duplicity --dry-run --include-globbing-filelist ~/.config/duplicity.filelist \
     /home/user file://tmp/dry-run --no-encryption -v info
Using archive dir: /home/user/.cache/duplicity/f4f89c3e786d652ca77a73fbec1e2fea
Using backup name: f4f89c3e786d652ca77a73fbec1e2fea
Import of duplicity.backends.mediafirebackend Succeeded
Reading globbing filelist /home/user/.config/duplicity.filelist
A .
A .config
A .config/blah
A bin
A bin/duplicity
A Documents
A Documents/README
--------------[ Backup Statistics ]--------------
StartTime 1430102250.91 (Sun Apr 26 22:37:30 2015)
EndTime 1430102251.46 (Sun Apr 26 22:37:31 2015)
ElapsedTime 0.55 (0.55 seconds)
SourceFiles 6
SourceFileSize 1234 (1 MB)

A leading - excludes the entry from the file list, while simply listing an item includes it. This makes it possible to whitelist a few nodes instead of blacklisting a ton of unrelated files.
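The first-match-wins selection can be modeled in a few lines of Python (a deliberately simplified sketch of duplicity's globbing rules; the paths and patterns are illustrative):

```python
from fnmatch import fnmatch

# "- pattern" excludes, a bare path includes; the first matching line wins.
filelist = [
    "- /home/user/.config/libvirt",
    "/home/user/.config",
    "- **",
]

def matches(path: str, pattern: str) -> bool:
    # A pattern matches the path itself, anything beneath it,
    # or via shell-style globbing ("**" matches everything here).
    return (path == pattern
            or path.startswith(pattern + "/")
            or fnmatch(path, pattern))

def selected(path: str) -> bool:
    for line in filelist:
        exclude = line.startswith("- ")
        pattern = line[2:] if exclude else line
        if matches(path, pattern):
            return not exclude
    return True  # nothing matched: include by default

assert not selected("/home/user/.config/libvirt/qemu.conf")  # excluded explicitly
assert selected("/home/user/.config/dconf")                  # under an included path
assert not selected("/home/user/Videos/movie.mkv")           # swept up by "- **"
```

The order matters: the libvirt exclusion must come before the .config inclusion, otherwise the broader include would win.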

In my case I run:

$ duplicity --include-globbing-filelist ~/.config/duplicity.filelist /home/user mf://Coalmine/duplicity/

And this is how I do off-site backups.

P.S. If you wonder why I decided to use a passphrase instead of a GPG key: I want the key to be memorizable, so in case I lose access to my GPG key, I will still most likely be able to recover the files.

NIC.UA Under Attack

As of now, the primary hosting for this site is not accessible due to a Ukrainian police action in the data center.

According to the NIC.UA twitter feed, the hosting servers are being seized, allegedly because separatist web sites hosted their domains at NIC.UA.

Andrew Khvetkevich writes (translation mine):

The servers are seized because of separatists' domains. But we terminated those regularly! It does not make sense :( #nicua

This means that a lot of web sites will now be in limbo and if you are reading this, then my emergency hosting switch was successful.

Chef It Up: OpenDJ

Recently I had the pleasure of automating the installation of OpenDJ, a ForgeRock directory (think LDAP) project. While I can't share the cookbook we ended up using, I can provide the most important part.


OpenDJ
    A project by ForgeRock implementing an LDAPv3 server and client. This is a fork of the abandoned Sun OpenDS.
ldapsearch
    A command line application allowing you to search an LDAP directory.
ldapmodify
    A command line application that modifies the contents of LDAP directory record(s).


You all know LWRPs are good for you. Yet, lots of people writing Chef cookbooks tend to ignore that and come up with something like this:

execute 'run ldapmodify' do
    command <<-eos
        ldapmodify blah
        echo > /tmp/.ldapmodify-ran
    eos
    not_if { ::File.exist? '/tmp/.ldapmodify-ran' }
end

This is bad because...

  • In case you modify anything in the referenced command, your execute block will not run, because of the guard file.
  • Every time you need to add another ldapmodify command, you will need to come up with another guard name.
  • It enforces idempotency based not on the data, but on a side effect. When you converge the node for the second time after removing the data, you won't get the data back.

At some point you will have a whole pile of these blocks, each with its own guard file. This does not scale well.

Well, the initial configuration of OpenDJ really does require something like the execute block above: the installer can run only once, so there should be a guard on the configuration files it creates.

What to do then?

Add ldapmodify resource!

You will need to create a library and a resource/provider pair. The library will implement the ldapsearch shellout, while the provider will simply call it. Having the code in a library allows you to turn that ugly execute block into a less ugly opendj_ldapmodify one:

opendj_ldapmodify '/path/to.ldif' do
    only_if { !ldapsearch('ou=something').include?('ou: something') }
end

ldapsearch returns an error when it can't connect to the server or the baseDN is wrong, yet it returns 0 when there are no results. That makes perfect sense, but since we need a flexible call, we simply analyze the output to see whether it contains the object we are looking for.

I don't consider myself a Ruby developer, so take the code with a grain of salt:

module OpenDjCookbook
  module Helper
    include Chef::Mixin::ShellOut

    def ldapsearch(query)
      config = node['opendj']
      cmdline = [
        'ldapsearch',
        '-b', config['basedn'],
        '-h', config['host'],
        '-p', config['port'], # make sure this is a string
        '-D', config['userdn'],
        '-w', config['password'],
        query
      ]

      shell_out!(cmdline, :user => config['service_user']).stdout.strip
    rescue Mixlib::ShellOut::ShellCommandFailed => e
      ''
    end
  end
end

The -p argument needs to be a string because of mixlib-shellout#90, and yes, I am using the password from a node attribute (use run_state instead!). In case of an error we simply return an empty string to make not_if/only_if guards simpler.


The ldapmodify provider is quite similar: we just shell out to ldapmodify. We assume the resource name is the path to the LDIF file, so we just do this:


action :run do
  path = new_resource.path

  execute "ldapmodify -f #{path}" do
    config = node['opendj']
    command [
      'ldapmodify',
      '-a', # default action: add
      '-h', config['host'],
      '-p', config['port'].to_s, # must be a string
      '-D', config['userdn'],
      '-w', config['password'],
      '-f', path
    ]

    user config['service_user']
  end
end


And resource is very simple:

actions :run
default_action :run

attribute :path, :name_attribute => true, :kind_of => String, :required => true

Hooking up a library to resource

You won't be able to use ldapsearch if you don't include the library into the resource, and I found that wiring it up in the recipe works for me, so...

# opendj/recipes/default.rb
Chef::Resource::OpendjLdapmodify.send(:include, OpenDjCookbook::Helper)

# rest of the recipe


So now we can call ldapmodify from Chef as a resource, and use ldapsearch to avoid modifying the data store when we don't need to.

OpenDJ is quite an interesting project. You may think that LDAP is dead, but it is alive and kicking: a lot of companies use ForgeRock solutions for identity and access management, and the identity part most likely lives in an LDAP database. The Apache Directory project is another implementation of an LDAPv3 server, with the Eclipse-based Directory Studio for manipulating the data.

Force Ekiga 3.x Network Interface Setting

This post originally appeared here on 2009-07-19.

New Ekiga versions do not allow setting the network interface used to send requests. This is now controlled by the underlying OPAL library, and the Ekiga developer does not see any problem with that. The problems, though, are caused by sending REGISTERs on all available interfaces, hoping that at least one will make its way to the server.

This manifested as the following message during the registration:

Could not register(Timeout)

Now, when more than one interface can reach the SIP server, such as virtualization interfaces enabled for forwarding (virbr, docker), Ekiga fails with the following error:

Could not register (Globally not acceptable)

Bug 553595 – cannot connect to Asterisk server

Damien Sandras (ekiga developer):

We tend to automate things at maximum, so I don’t think it is possible.

Perhaps in 3.2 we will reintroduce a setting, but in that case if you listen only on one interface, there is no way to connect through a VPN and to the Internet at the same time. I’ll discuss with Robert, but I think the problem is on Asterisk side. One thing we could improve is that as soon we get an answer for one interface, we do not try others.

Basically, all I needed to do was expose a single network interface to Ekiga so that it does not send bogus data through the other ones.

The answer is… the ioctl wrapping done previously for my Motorola A1200 camera.

Sources are here: ~rye/+junk/exposeif. Don't forget to run ldconfig after installing.


$ bzr branch lp:~rye/+junk/exposeif
$ cd exposeif
$ make
$ sudo make install


/usr/local/bin/exposeif usage:

    /usr/local/bin/exposeif options program

    -i|--interfaces  List of interfaces to expose (comma delimited)
    -d|--debug       Debug output from the wrapper
    -h|--help        This help

    /usr/local/bin/exposeif -i eth0,lo ekiga -d 4

This is not production-level code. It may cause memory leaks within the application. As always, the standard no-liability disclaimer applies.

Moving stuff around

The site has moved to a new hosting provider and got a new domain name, but all the old content should still be available and redirected to the new location when needed.

Gem rebuild gotcha

Production machines should never have anything compiled from source, and they should not have the tools to do that. With that in mind, I was packaging Ruby gems using Effing Package Management (fpm).

Usually I write RPM spec files manually when no existing ones fit our purposes, making sure updates won't affect the existing installation; however, packaging 100+ rubygems was not something I wanted to spend a day on.

Enter fpm

$ gem fetch berkshelf
Fetching: berkshelf-3.1.5.gem (100%)
Downloaded berkshelf-3.1.5
$ fpm -s gem -t rpm berkshelf-3.1.5.gem
no value for epoch is set, defaulting to nil {:level=>:warn}
no value for epoch is set, defaulting to nil {:level=>:warn}
Created package {:path=>"rubygem-berkshelf-3.1.5-1.noarch.rpm"}

Great! Except that fpm does not build the dependencies, it only references them in the Requires field:

$ rpm -qpR rubygem-berkshelf-3.1.5-1.noarch.rpm
rubygem(octokit) >= 3.0
rubygem(octokit) < 4.0
rubygem(celluloid) >= 0.16.0.pre
rubygem(celluloid) <
rubygem(celluloid-io) >= 0.16.0.pre
rubygem(celluloid-io) <

See that 0.16.0.pre version?

The Version field in the spec is where the maintainer should put the current version of the software being packaged. If the version is non-numeric (contains tags that are not numbers), you may need to include the additional non-numeric characters in the release field. -- Fedora Packaging Naming Guidelines

To make a long story short, our berkshelf RPM will not be installable: a celluloid RPM with version 0.16.0 will not satisfy the 0.16.0.pre requirement.

A quick and dirty way of handling this is to build the celluloid RPM as is, but update berkshelf's gemspec to reference a version we can use.

Rebuilding gem

Should be as easy as gem unpack and gem build:

$ gem unpack berkshelf-3.1.5.gem
Unpacked gem: '/tmp/vendor/cache/berkshelf-3.1.5'
$ sed -i 's/0\.pre/0/' berkshelf-3.1.5/berkshelf.gemspec
$ gem build berkshelf-3.1.5/berkshelf.gemspec
fatal: Not a git repository (or any of the parent directories): .git
WARNING:  description and summary are identical
  Successfully built RubyGem
  Name: berkshelf
  Version: 3.1.5
  File: berkshelf-3.1.5.gem

Notice the fatal: Not a git repository message, and look at the resulting gem:

$ ls -l berkshelf-3.1.5.gem
-rw-r--r-- 1 rye rye 4608 Oct 25 16:56 berkshelf-3.1.5.gem

The resulting gem is almost 5 KiB, down from the original 103 KiB. Our gem is empty now.

Note: gem unpack --spec produces a YAML-formatted gemspec file which is not accepted by gem build, and fpm --no-gem-prerelease does not affect dependencies.

Enter git

Look at berkshelf.gemspec and notice that it uses git to provide the file listing:

  s.homepage                  = ''
  s.license                   = 'Apache 2.0'
  s.files                     = `git ls-files`.split($\)
  s.executables               = s.files.grep(%r{^bin/}).map{ |f| File.basename(f) }
  s.test_files                = s.files.grep(%r{^(test|spec|features)/})

That's where the 'fatal' message comes from, and since this appears to be the recommended way of writing a gemspec, this is what makes our resulting gem file empty (the error from git ls-files is silently ignored). It is expected that the gem will always be built from a git repository, which is not true in our case.

Again, fixing it the quick and dirty way by making the unpacked gem folder a git repository:

$ git init berkshelf-3.1.5
Initialized empty Git repository in /tmp/vendor/cache/berkshelf-3.1.5/.git/
$ pushd berkshelf-3.1.5
$ git add .
$ git commit -m "Dummy commit"
$ gem build berkshelf.gemspec
WARNING:  description and summary are identical
  Successfully built RubyGem
  Name: berkshelf
  Version: 3.1.5
  File: berkshelf-3.1.5.gem
$ mv berkshelf-3.1.5.gem ../
$ popd
$ ls -l berkshelf-3.1.5.gem
-rw-r--r-- 1 rye rye 105472 Oct 25 17:10 berkshelf-3.1.5.gem

Much better.

Final build

$ fpm -s gem -t rpm berkshelf-3.1.5.gem
no value for epoch is set, defaulting to nil {:level=>:warn}
no value for epoch is set, defaulting to nil {:level=>:warn}
Created package {:path=>"rubygem-berkshelf-3.1.5-1.noarch.rpm"}
$ rpm -qpR rubygem-berkshelf-3.1.5-1.noarch.rpm
rubygem(celluloid) >= 0.16.0
rubygem(celluloid) < 0.17.0
rubygem(celluloid-io) >= 0.16.0
rubygem(celluloid-io) < 0.17.0

While the original issue may be seen as a bug in fpm (I will update the post if/when a GitHub issue is created for it), the dependency on git for the file listing may cause a bit of confusion for an unsuspecting developer or release engineer.