Key Pressure

MediaFire + Duplicity = Backup

Foreword

Your backups must always be encrypted. Period. Use the longest passphrase you can remember. Always assume the worst, and don't trust anybody who says your online file storage is perfectly secure. Nothing is perfectly secure.

MediaFire is an evolving project, so my statements may not be applicable in, say, 3 months. However, I suggest you err on the side of caution.

Here are the things you need to be aware of upfront security-wise:

  • There is no two-factor authentication yet for the MediaFire web interface.
  • Sessions created from your username and password have a very long lifespan. It is not possible to destroy a session other than by changing your password.
  • The browser part of the web interface can leak your v1 session, since the file viewer (e.g. picture preview or video player) forces the connection over plain HTTP due to mixed-content issues.

All mediafire-python-open-sdk calls are made over HTTPS, but if you use your MediaFire account in an untrusted environment such as coffee-shop WiFi, you will want to use a VPN.

My primary use case for this service is an encrypted off-site backup of my computers, so I found these risks to be acceptable.

Once upon a time

I was involved in Ubuntu One, and when the file synchronization project closed, I was left without a fallback procedure. I continued with local disk backups, searching for something that had:

  • an API which is easy to use,
  • a low price for 100GB of data,
  • an ability to publish files and folders to a wider internet if needed,
  • no requirement for a daemon running on my machine,
  • a first-party Android client.

Having considered quite a few possibilities, I ended up intrigued by MediaFire, partially because they had an API, and they seemingly had a Linux client for uploading things (which I was never able to download from their website), but there was not much integration with other software on my favorite platform. They had a first-year promo price of $25/year, so I started playing with their API, and the "Coalmine" project was born, initially targeting Python 3.

When I got to the point of uploading a file through the API, I decided to upgrade to a paid account, which does not expire.

I became a frequent visitor on the MediaFire development forums, where I reported bugs and asked for documentation updates. I started adding tests to my "Coalmine" project, and at some point I posted a link to my implementation on the developer forum and was contacted by a MediaFire representative asking whether it would be OK for them to copy the repository into their own space, granting me all the development rights.

That's when "Coalmine" became mediafire-python-open-sdk.

...

Oh, duplicity? Right...

Duplicity

A solid backup strategy was required. I knew about all the hurdles of file synchronization firsthand, so I wanted a dedicated backup solution. Duplicity fits perfectly.

Now, as I said, there were no MediaFire modules for Duplicity, so I ported my SDK code to Python 2 with a couple of lines changed. I looked at the Backend class, then put the project aside, continuing to upload gpg-encrypted tar archives.

A few weeks ago I finally felt compelled to do something about Duplicity and found that implementing a backend is way easier than it looked.

And now I have another project, duplicity-mediafire.

It is just a backend that expects the MEDIAFIRE_EMAIL and MEDIAFIRE_PASSWORD environment variables with MediaFire credentials.

It is not part of Duplicity and won't be proposed for inclusion until I am comfortable with the quality of my mediafire layer.

Follow the README for installation instructions. I am using the project with duplicity 0.6.25, so it may fail if you are running a different version. Please let me know about this via the project's issues.

I put my credentials in a duplicity wrapper for now, since dealing with keystores is yet another can of worms. Note that the variables must be exported, otherwise duplicity won't see them:

#!/bin/sh

export MEDIAFIRE_EMAIL='mediafire@example.com'
export MEDIAFIRE_PASSWORD='this is a secret password'
export PASSPHRASE='much secret, wow'

exec /usr/bin/duplicity "$@"
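
Since the wrapper holds credentials in plain text, keep it readable only by you; the destination path is illustrative:

$ install -m 0700 duplicity-wrapper ~/bin/duplicity-wrapper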

Now I can run duplicity manually and even put it into cron (see the crontab sketch below):

$ duplicity full tmp/logs/ mf://Coalmine/Backup
Local and Remote metadata are synchronized, no sync needed.
Last full backup date: none
--------------[ Backup Statistics ]--------------
StartTime 1430081835.40 (Sun Apr 26 16:57:15 2015)
EndTime 1430081835.48 (Sun Apr 26 16:57:15 2015)
ElapsedTime 0.08 (0.08 seconds)
SourceFiles 39
SourceFileSize 584742 (571 KB)
NewFiles 39
NewFileSize 584742 (571 KB)
DeletedFiles 0
ChangedFiles 0
ChangedFileSize 0 (0 bytes)
ChangedDeltaSize 0 (0 bytes)
DeltaEntries 39
RawDeltaSize 580646 (567 KB)
TotalDestinationSizeChange 376914 (368 KB)
Errors 0
-------------------------------------------------

$ duplicity list-current-files mf://Coalmine/Backup
Local and Remote metadata are synchronized, no sync needed.
Last full backup date: Sun Apr 26 16:57:14 2015
Fri Apr 24 21:31:53 2015 .
Fri Apr 24 21:31:52 2015 access_log
Fri Apr 24 21:31:52 2015 access_log.20150407.bz2
...

$ duplicity mf://Coalmine/Backup /tmp/logs
Local and Remote metadata are synchronized, no sync needed.
Last full backup date: Sun Apr 26 16:57:14 2015

$ ls -l /tmp/logs
total 660
-rw-r--r--. 1 user user   5474 Apr 24 21:31 access_log
-rw-r--r--. 1 user user  15906 Apr 24 21:31 access_log.20150407.bz2
-rw-r--r--. 1 user user  26885 Apr 24 21:31 access_log.20150408.bz2
...
/galleries/dropbox/mediafire-duplicity.png
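
For unattended runs, a crontab sketch; the schedule and the wrapper path (the script above, assumed to be saved as ~/bin/duplicity-wrapper) are illustrative:

# full backup Saturday night, incrementals the rest of the week
30 3 * * 6   /home/user/bin/duplicity-wrapper full /home/user/Documents mf://Coalmine/Backup
30 3 * * 0-5 /home/user/bin/duplicity-wrapper incremental /home/user/Documents mf://Coalmine/Backup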

If you want to know what's happening behind the scenes during upload, you can uncomment this block:

# import logging
#
# logging.basicConfig()
# logging.getLogger('mediafire.uploader').setLevel(logging.DEBUG)

This is an early version of the backend, so if an upload takes too long and no network activity is visible, you may want to terminate the process and rerun the command.

FILE SELECTION

This is the section of duplicity(1) to consult if you want to back up multiple directories.

Now, I've spent half an hour trying to figure out how to back up only the things I want, so here's how.

Create a duplicity.filelist, e.g. in ~/.config, and put the following there:

/home/user/bin
/home/user/Documents
- /home/user/.config/libvirt
/home/user/.config
- **

Now run duplicity with --include-globbing-filelist ~/.config/duplicity.filelist in dry-run mode and you'll get something like this (note that the output is fake, so don't try to match the numbers):

$ duplicity --dry-run --include-globbing-filelist ~/.config/duplicity.filelist \
     /home/user file://tmp/dry-run --no-encryption -v info
Using archive dir: /home/user/.cache/duplicity/f4f89c3e786d652ca77a73fbec1e2fea
Using backup name: f4f89c3e786d652ca77a73fbec1e2fea
...
Import of duplicity.backends.mediafirebackend Succeeded
...
Reading globbing filelist /home/user/.config/duplicity.filelist
...
A .
A .config
A .config/blah
A bin
A bin/duplicity
A Documents
A Documents/README
--------------[ Backup Statistics ]--------------
StartTime 1430102250.91 (Sun Apr 26 22:37:30 2015)
EndTime 1430102251.46 (Sun Apr 26 22:37:31 2015)
ElapsedTime 0.55 (0.55 seconds)
SourceFiles 6
SourceFileSize 1234 (1 MB)
...

A leading - excludes the entry from the backup, while simply listing an item includes it. This makes it possible to whitelist a few nodes instead of blacklisting a ton of unrelated files.

In my case I run:

$ duplicity --include-globbing-filelist ~/.config/duplicity.filelist /home/user mf://Coalmine/duplicity/
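
To verify the selection, the same list-current-files invocation shown earlier works against this target too:

$ duplicity list-current-files mf://Coalmine/duplicity/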

And this is how I do off-site backups.

P.S. If you wonder why I decided to use a passphrase instead of a GPG key: I want the key to be memorizable, so even if I lose access to my GPG keyring, I will most likely still be able to recover the files.


NIC.UA Under Attack

As of now, the primary hosting for this site is not accessible due to a Ukrainian police action in the nic.ua data center.

According to the nic.ua Twitter feed, the hosting servers are being seized, allegedly because separatist web sites host their domains at nic.ua.

Andrew Khvetkevich writes (translation mine):

The servers are seized because of the separatists' domains. But we terminated those promptly! It does not make sense :( #nicua

This means that a lot of web sites will now be in limbo. If you are reading this, my emergency hosting switch was successful.


Chef It Up: OpenDJ

Recently I had the pleasure of automating the installation of OpenDJ, a ForgeRock directory (think LDAP) project. While I can't share the cookbook we ended up using, I can provide the most important part.

Glossary

OpenDJ
A project by ForgeRock implementing an LDAPv3 server and client; a fork of the abandoned Sun OpenDS.
ldapsearch
A command-line application that searches an LDAP directory.
ldapmodify
A command-line application that modifies the contents of LDAP directory record(s).

LWRP

You all know LWRPs are good for you. Yet lots of people writing Chef cookbooks tend to ignore that and come up with something like this:

execute 'run ldapmodify' do
    command <<-eos
        ldapmodify blah
        echo > /tmp/.ldapmodify-ran
    eos
    not_if { ::File.exist? '/tmp/.ldapmodify-ran' }
end

This is bad because...

  • If you modify anything in the referenced command, your execute block will not run, because of the guard file.
  • Every time you need to add another ldapmodify command, you have to come up with another guard name.
  • It enforces idempotency based not on the data but on a side effect. When you converge the node a second time after removing the data, you won't get the data back.

At some point you will have something like this:

/tmp/.ldapmodify-ran
/tmp/.ldapmodify-update1
/tmp/.ldapmodify-squirrel

This does not scale well.

That said, the initial configuration of OpenDJ really does require something like the execute block above: the installer can run only once, so there should be a guard on the configuration files it creates.

What to do then?

Add ldapmodify resource!

You will need to create a library and a resource/provider pair. The library implements the ldapsearch shell-out, while the provider simply calls it. Having the code in a library allows you to turn that ugly execute block into a less ugly opendj_ldapmodify one:

opendj_ldapmodify '/path/to.ldif' do
    only_if { !ldapsearch('ou=something').include?('ou: something') }
end

ldapsearch returns an error when it can't connect to the server or the baseDN is wrong, yet it returns 0 when there are no results. That makes perfect sense, but since we need a flexible call, we simply analyze the output to see whether it contains the object we are looking for.

I don't consider myself to be a ruby developer, so take the code with a grain of salt:

module OpenDjCookbook
  module Helper
    include Chef::Mixin::ShellOut

    def ldapsearch(query)
      config = node['opendj']
      cmdline = [
        '/path/to/ldapsearch',
        '-b', config['basedn'],
        '-h', config['host'],
        '-p', config['port'], # make sure this is a string
        '-D', config['userdn'],
        '-w', config['password'],
        query
      ]

      begin
        shell_out!(cmdline, :user => config['service_user']).stdout.strip
      rescue Mixlib::ShellOut::ShellCommandFailed
        ''
      end
    end
  end
end

The -p argument needs to be a string because of mixlib-shellout#90, and yes, I am using the password from a node attribute (use run_state instead!). In case of an error we simply return an empty string to keep not_if/only_if guards simple.

Provider

The ldapmodify provider is quite similar: we just shell out to ldapmodify. We assume the resource name is the path to the LDIF file, so we just do this:

use_inline_resource

action :run do
  path = new_resource.path

  execute "ldapmodify -f #{path}" do
    config = node['opendj']
    command [
      '/path/to/ldapmodify',
      '-a', # default action: add
      '-h', config['host'],
      '-p', config['port'].to_s, # must be string
      '-D', config['userdn'],
      '-w', config['password'],
      '-f', path
    ]
    user config['service_user']
  end
end

Resource

And the resource is very simple:

actions :run
default_action :run

attribute :path, :name_attribute => true, :kind_of => String, :required => true

Hooking the library up to the resource

You won't be able to use ldapsearch unless you include the library into the resource, and I found that wiring it up in the recipe works for me, so...

# opendj/recipes/default.rb
Chef::Resource::OpendjLdapmodify.send(:include, OpenDjCookbook::Helper)

# rest of the recipe

Done

So now we can call ldapmodify from Chef as a resource, and use ldapsearch to avoid modifying the data store when we don't need to.

OpenDJ is quite an interesting project. You may think that LDAP is dead, but it is alive and kicking: a lot of companies use ForgeRock solutions for identity and access management, and the identity part most likely lives in an LDAP database. The Apache Directory project is another LDAPv3 server implementation, with the Eclipse-based Directory Studio for manipulating the data.


Force Ekiga 3.x Network Interface Setting

This post originally appeared here on 2009-07-19.

New Ekiga versions do not allow setting the network interface used to send requests. This is now controlled by the underlying OPAL library, and the Ekiga developers do not see any problem with that. The problems, however, are caused by sending REGISTERs on all available interfaces in the hope that at least one will make its way to the server.

This manifested as the following message during registration:

Could not register(Timeout)

Now, when more than one interface can reach the SIP server, e.g. with virtualization interfaces enabled for forwarding (virbr, docker), Ekiga fails with the following error:

Could not register (Globally not acceptable)

Bug 553595 – cannot connect to Asterisk server

Damien Sandras (Ekiga developer):

We tend to automate things at maximum, so I don’t think it is possible.

Perhaps in 3.2 we will reintroduce a setting, but in that case if you listen only on one interface, there is no way to connect through a VPN and to the Internet at the same time. I’ll discuss with Robert, but I think the problem is on Asterisk side. One thing we could improve is that as soon we get an answer for one interface, we do not try others.

Basically, all I needed to do was expose a single network interface to Ekiga so that it does not send bogus data through the other ones.

The answer is… the ioctl wrapping I did previously for my Motorola A1200 camera.

Sources are here: ~rye/+junk/exposeif. Don't forget to run ldconfig after installing.

Installing

$ bzr branch lp:~rye/+junk/exposeif
$ cd exposeif
$ make
$ sudo make install

Usage

/usr/local/bin/exposeif usage:

    /usr/local/bin/exposeif options program

Options:
    -i|--interfaces  List of interfaces to expose (comma delimited)
    -d|--debug       Debug output from libexposeif.so
    -h|--help        This help

Example:
    /usr/local/bin/exposeif -i eth0,lo ekiga -d 4

This is not production-level code and may cause memory leaks within the application. As always, the standard no-liability disclaimer applies.


Moving stuff around

The site has moved to a new hosting provider and got a new domain name, but all the old content should still be available, with redirects to the new locations where needed.


Gem rebuild gotcha

Production machines should never have anything compiled from source, and they should not even have the tools to do that. Keeping that in mind, I was packaging Ruby gems using Effing Package Management (fpm).

Usually I write RPM spec files manually when no existing ones fit our purposes, making sure updates won't affect the existing installation; however, packaging 100+ rubygems is not something I would like to spend a day on.

Enter fpm

$ gem fetch berkshelf
Fetching: berkshelf-3.1.5.gem (100%)
Downloaded berkshelf-3.1.5
$ fpm -s gem -t rpm berkshelf-3.1.5.gem
no value for epoch is set, defaulting to nil {:level=>:warn}
no value for epoch is set, defaulting to nil {:level=>:warn}
Created package {:path=>"rubygem-berkshelf-3.1.5-1.noarch.rpm"}

Great! Except that fpm does not build the dependencies; it only references them in the Requires field:

$ rpm -qpR rubygem-berkshelf-3.1.5-1.noarch.rpm
...
rubygem(octokit) >= 3.0
rubygem(octokit) < 4.0
rubygem(celluloid) >= 0.16.0.pre
rubygem(celluloid) < 0.16.1.0
rubygem(celluloid-io) >= 0.16.0.pre
rubygem(celluloid-io) < 0.16.1.0
...

See that 0.16.0.pre version?

The Version field in the spec is where the maintainer should put the current version of the software being packaged. If the version is non-numeric (contains tags that are not numbers), you may need to include the additional non-numeric characters in the release field. -- Fedora Packaging Naming Guidelines

To make a long story short, our berkshelf RPM will not be installable: a celluloid RPM with version 0.16.0 will not satisfy the 0.16.0.pre requirement.
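
You can confirm the ordering with rpmdev-vercmp from the rpmdevtools package (assuming you have it installed); it reports 0.16.0.pre as the newer version, which is why 0.16.0 falls below the >= 0.16.0.pre bound:

$ rpmdev-vercmp 0.16.0 0.16.0.pre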

A quick and dirty way of handling this is to build the celluloid RPM as is, but update berkshelf's gemspec to reference a version we can actually use.

Rebuilding gem

It should be as easy as gem unpack and gem build:

$ gem unpack berkshelf-3.1.5.gem
Unpacked gem: '/tmp/vendor/cache/berkshelf-3.1.5'
$ sed -i 's/0\.pre/0/' berkshelf-3.1.5/berkshelf.gemspec
$ gem build berkshelf-3.1.5/berkshelf.gemspec
fatal: Not a git repository (or any of the parent directories): .git
WARNING:  description and summary are identical
  Successfully built RubyGem
  Name: berkshelf
  Version: 3.1.5
  File: berkshelf-3.1.5.gem

Notice fatal: Not a git repository and look at the resulting gem:

$ ls -l berkshelf-3.1.5.gem
-rw-r--r-- 1 rye rye 4608 Oct 25 16:56 berkshelf-3.1.5.gem

The resulting gem is under 5 KiB, down from the original 103 KiB. Our gem is now empty.

Note: gem unpack --spec produces a YAML-formatted gemspec file, which gem build will not accept. fpm --no-gem-prerelease does not affect dependencies either.

Enter git

Look at berkshelf.gemspec and notice that it uses git to provide the file listing:

...
  s.homepage                  = 'http://berkshelf.com'
  s.license                   = 'Apache 2.0'
  s.files                     = `git ls-files`.split($\)
  s.executables               = s.files.grep(%r{^bin/}).map{ |f| File.basename(f) }
  s.test_files                = s.files.grep(%r{^(test|spec|features)/})
...

That's where the 'fatal' message comes from, and since this appears to be a recommended way of writing a gemspec, it is also what makes our resulting gem empty (the error from git ls-files is silently ignored). The assumption is that the gem will always be built from a git repository, which is not true in our case.

Again, fixing it the quick and dirty way: make the unpacked gem folder a git repository:

$ git init berkshelf-3.1.5
Initialized empty Git repository in /tmp/vendor/cache/berkshelf-3.1.5/.git/
$ pushd berkshelf-3.1.5
$ git add .
$ git commit -m "Dummy commit"
$ gem build berkshelf.gemspec
WARNING:  description and summary are identical
  Successfully built RubyGem
  Name: berkshelf
  Version: 3.1.5
  File: berkshelf-3.1.5.gem
$ mv berkshelf-3.1.5.gem ../
$ popd
$ ls -l berkshelf-3.1.5.gem
-rw-r--r-- 1 rye rye 105472 Oct 25 17:10 berkshelf-3.1.5.gem

Much better.

Final build

$ fpm -s gem -t rpm berkshelf-3.1.5.gem
no value for epoch is set, defaulting to nil {:level=>:warn}
no value for epoch is set, defaulting to nil {:level=>:warn}
Created package {:path=>"rubygem-berkshelf-3.1.5-1.noarch.rpm"}
$ rpm -qpR rubygem-berkshelf-3.1.5-1.noarch.rpm
...
rubygem(celluloid) >= 0.16.0
rubygem(celluloid) < 0.17.0
rubygem(celluloid-io) >= 0.16.0
rubygem(celluloid-io) < 0.17.0
...

While the original issue may be seen as a bug in FPM (I will update the post if/when a GitHub issue is created for it), the dependency on git for the file listing may cause a bit of confusion for an unsuspecting developer or release engineer.


Jenkins System Properties

I have been doing much more work with Jenkins lately, and given the void of solutions to some of the issues I am facing, I decided to start posting them here.

So today we are going to load some system properties on Jenkins startup.

Jenkins allows Groovy hook scripts to be set up that run early during startup or when Jenkins experiences a boot failure. Since these scripts run in the same JVM as Jenkins, we can set up a script that sets system properties directly or loads them from a file.

Setup is simple: put jenkins.properties into your $JENKINS_HOME and create init.groovy.d there too. Then place the following Groovy file under init.groovy.d:

load-properties.groovy

import jenkins.model.Jenkins
import java.util.logging.LogManager

def logger = LogManager.getLogManager().getLogger("")

/* JENKINS_HOME environment variable is not reliable */
def jenkinsHome = Jenkins.instance.getRootDir().absolutePath

def propertiesFile = new File("${jenkinsHome}/jenkins.properties")

if (propertiesFile.exists()) {
    logger.info("Loading system properties from ${propertiesFile.absolutePath}")
    propertiesFile.withReader { r ->
        /* Loading java.util.Properties as defaults makes empty Properties object */
        def props = new Properties()
        props.load(r)
        props.each { key, value ->
            System.setProperty(key, value)
        }
    }
}
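
The properties file itself is plain java.util.Properties format; mine contains just the trust store override described at the end of this post:

$ cat "$JENKINS_HOME/jenkins.properties"
javax.net.ssl.trustStore=/var/lib/jenkins/.keystore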

Now restart Jenkins and observe the following output:

Sep 26, 2014 9:59:17 PM jenkins.InitReactorRunner$1 onAttained
INFO: Augmented all extensions
Sep 26, 2014 9:59:20 PM jenkins.InitReactorRunner$1 onAttained
INFO: Loaded all jobs
Sep 26, 2014 9:59:20 PM jenkins.util.groovy.GroovyHookScript execute
INFO: Executing /home/rye/.jenkins/init.groovy.d/load-properties.groovy
Sep 26, 2014 9:59:20 PM org.jenkinsci.main.modules.sshd.SSHD start
INFO: Started SSHD at port 48042
Sep 26, 2014 9:59:20 PM java.util.logging.LogManager$RootLogger log
INFO: Loading system properties from /home/rye/.jenkins/jenkins.properties
Sep 26, 2014 9:59:20 PM jenkins.InitReactorRunner$1 onAttained
INFO: Completed initialization

Visit $JENKINS_URL/systemInfo (e.g. http://localhost:8080/systemInfo) and see your system property defined.

I needed this because the certificate I got from StartSSL was not trusted by the JVM by default, so I had to override the trust store by creating a new keystore ($JENKINS_HOME/.keystore), importing the StartSSL Class 1 certificate, and setting the javax.net.ssl.trustStore=/var/lib/jenkins/.keystore system property.
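
For reference, creating that keystore and importing the CA certificate goes roughly like this (the alias and certificate file name are illustrative; keytool prompts for a keystore password and for trust confirmation):

$ keytool -importcert -alias startssl-class1 \
      -keystore /var/lib/jenkins/.keystore -file startssl-class1.pem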


Sad Acer A1 status update

This January, after 3 years of horrible performance from buggy hardware and software, my Acer Liquid E's USB port partially detached from the motherboard, preventing the device from being charged or accessed over USB.

Replacing the motherboard makes no sense, since that is equivalent to buying another horrible, broken Acer Liquid E. Replacing the 10-pin mini-USB connector requires both a compatible part (CN 10PIN 215+916+2450 ACT) and precision tools I don't have.

This brings an end to my attempts to fix a thing that should not have been broken in the first place.

Acer Liquid E idle at 39.2 ℃

I've learned a lot about Android internals and kernel development. It inspired me to dig deeper and even join Samsung R&D Ukraine briefly to study embedded development, which made me realize that supporting a device without manufacturer assistance is a thankless job.

Devices get released at an ever-increasing rate, and the deprecation of components drives the cost of supporting existing devices prohibitively high. That means there are only two purchasing options now: an undertested device or an obsolete one.

Sad but true.


Shrinking Oversized Images in Liferea

Update (2015-05-02): Fixed the icon shrinking, also uploaded my entire stylesheet.

I started using liferea quite a while ago for RSS feed and the only issue I encountered was the way the images were displayed if the image was larger than the window:

Liferea display before the change

By adding the following rule to ~/.config/liferea/liferea.css, I got all images resized to fit the window, and browsing large photos in feeds is no longer an issue.

div.content img {
    max-width: 100%;
    height: auto;
}
Liferea display after the change


Working With L-2 Status

Please note that I can't be held liable for any errors or omissions in this document; you will need to seek a real immigration lawyer should you have any questions not answered on the USCIS/SSA websites.

TL;DR version: get an EAD, then an SSN, and that's all you need to start working.

L-2 is a visa type/status issued to "a spouse of an intra-company transferee". Regardless of what you read elsewhere, you are NOT eligible to work anywhere unless you receive an Employment Authorization Document (EAD).

For some weird reason, SSA and DHS disagree on whether L-2 holders are eligible to work without an EAD:

USCIS/DHS:

Spouses of L-1 workers may apply for work authorization by filing a Form I-765, Application for Employment Authorization with fee. If approved, there is no specific restriction as to where the L-2 spouse may work.

SSA:

For COAs displaying a double asterisk (**) (non-immigrant E-1, E-2, and L-2 classifications), the spouse is also authorized to work without specific DHS authorization.

According to the SSA, a person admitted under L-2 can obtain an SSN without an EAD; the list of documents is:

  • an EAD (Form I-766) showing “A-18” under Category; or
  • evidence other than an EAD that proves the L-2’s lawful alien status (e.g., I-94) and a marriage document as evidence that he or she is the spouse of the principal L-1 alien.

However, in my case I was specifically asked for an EAD when submitting the documents for an SSN and was told that otherwise I would not be eligible to receive one; your mileage may vary.

Based on the DHS documents, your employer will want to see the work authorization, and the I-9 form states that an SSN card showing "ALLOWED TO WORK ONLY WITH DHS AUTHORIZATION" is not enough.

Getting EAD

The Employment Authorization Document is issued by the Department of Homeland Security. While you can eFile the documents, you will still need to provide a payment (personal check or money order) and photos, so you will need to get these to USCIS (U.S. Citizenship and Immigration Services), and the easiest way is to mail everything via the post office.

You will need to figure out where to send the documents based on the state you are currently in; see the instructions for your I-765 form.

Double- and triple-check the contents of your mail prior to posting. Create a checklist and tick it off as you put the items into the envelope; you really want to get everything done properly the first time.

You will want to get a tracking number so that you can check the status of the package as it travels across the US to the lockbox facility.

Once USCIS receives your mail and checks that it is intact, you will receive a paper notice, Form I-797C, Notice of Action, showing the current status of your case and the receipt number.

Now go to the USCIS web site, create an account, and register for e-mail notifications as your case passes through the various stages.

When your application is approved, DHS will send you another I-797 Notice of Action, this time on fancy watermarked paper, stating that your card is being sent to you.

Here's how the card looks:

/galleries/dropbox/ead.jpg

If you signed up for e-mail notifications on case progress, you will also receive an e-mail containing the USPS tracking number; however, in my case there appears to be a template processing bug, and I was left with "#DCN" as the tracking number. I have notified the webmaster about the issue but haven't heard anything back yet.

Change of address

EAD processing may take up to 90 days. If you happen to relocate while waiting for the document to arrive, you can update your address online by specifying the receipt number from the first Notice of Action. When DHS updates your information, they will send you a paper notification about it.

Social Security Number

Once you receive your EAD, you can go to your local Social Security Administration office and apply for a Social Security Number.

You will need to provide:

  • A completed SS-5 form.
  • Your foreign passport with the L-2 visa.
  • Employment Authorization Card.

You will be given a receipt showing your name, mailing address, and application date. The Social Security card should arrive in your mailbox within 2 weeks.

First day

As expected, on my first day of work I needed to provide my passport and EAD. The employment quest is finished, no issues detected.

