Building OpenWRT with OpenL2TP

This is just a quick dump of the process of building the current git of OpenWRT for OpenL2TP. A complete blog post about OpenL2TP on OpenWRT 2.4 will be available later.

Kernel

Patched pppol2tp.c to remove the NULL assignment to a nonexistent iptables structure.

After reflash:

root@OpenWrt:/lib/modules/2.4.37.5# insmod pppol2tp
insmod: unresolved symbol udp_prot

udp_prot is exported only when

#if defined (CONFIG_IPV6_MODULE) \
    || defined (CONFIG_KHTTPD) \
    || defined (CONFIG_KHTTPD_MODULE) \
    || defined (CONFIG_IP_SCTP_MODULE)

Selected CONFIG_IPV6_MODULE.

The IPv6 module does not actually get installed, but the symbol is exported.

root@OpenWrt:/etc/init.d# openl2tpd -R -f -D
Start, trace_flags=00000000 (debug enabled)
OpenL2TP V1.6, (c) Copyright 2004,2005,2006,2007,2008 Katalix Systems Ltd.
Loading plugin /usr/lib/openl2tp/ppp_unix.so, version V1.5
no entry for l2tp in /etc/services and -u not used
Cleaning up before exiting
Unloading plugin /usr/lib/openl2tp/ppp_unix.so

So, either /etc/services needs to be tweaked (hey, it is empty!) or -u 1701 must be given.
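
If going the /etc/services route, a single line is enough (1701/udp is the IANA-assigned L2TP port, and l2tp is the service name the error message above looks for):

l2tp            1701/udp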

Disable httpd service:

/etc/init.d/httpd disable

Debugging OpenL2TP:

l2tp> tunnel create dest_ipaddr=10.0.0.24 tunnel_name=vpn2
l2tp> session create tunnel_name=vpn2 user_name=xxx@internet.beeline.ua user_password=yyy

pppd 2.4.4 started as

pppd debug kdebug 7 noipdefault sync nodetach user xxx@internet.beeline.ua \
password yyy local noauth noaccomp nopcomp nobsdcomp \
nodeflate nopredictor1 novj novjccomp noendpoint nomp noproxyarp \
plugin pppol2tp.so plugin openl2tp.so pppol2tp 18 pppol2tp_tunnel_id 57957 \
pppol2tp_session_id 9843 pppol2tp_debug_mask 15

dies with

: unrecognized option 'nomp'

Reason: openwrt/package/ppp/patches/200-makefile.patch disables HAVE_MULTILINK despite:

# Linux distributions: Please leave multilink ENABLED in your builds

noendpoint is supported, so we just remove the nomp option.

: Plugin pppol2tp.so is for pppd version 2.4.5, this is 2.4.4

Reason: INCLUDE_DIR is set in OpenL2TP and it points to the system include dir.

Changing the Makefile to override INCLUDE_DIR as well, in openwrt/feeds/packages/net/openl2tp/Makefile:
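
Roughly like this (a sketch only - INCLUDE_DIR is the variable from OpenL2TP's own Makefile mentioned above, and the staging include path is an assumption about where the target ppp 2.4.4 headers end up):

MAKE_FLAGS += \
        INCLUDE_DIR="$(STAGING_DIR)/usr/include"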

ARGHHHH!!!:

Plugin pppol2tp.so loaded.
Plugin openl2tp.so loaded.
PPPoL2TP kernel driver not installed

Reason: an incorrect #define for the 2.4 kernel.

The session now comes up OK.

Ubuntu 9.10 Running in UML

/galleries/dropbox/ubuntu-inside-ubuntu.png

I have wanted to do this for a long time, but a lot of things prevented me from doing it. Yesterday I went through many, many kernel compile sessions, image restores and reconfigurations, but all of them ended in disk corruption during the ubuntu-desktop install in UML (the chrooted environment was OK).

So today I took the default kernel config from my Ubuntu 9.10, made some nice changes (see below), had a nice chat with GDM (it does not like me so far) and...

/galleries/dropbox/karmig-uml.thumbnail.png

This is a GNOME session from Karmic 9.10 running in UML, connected to a separate X server running on the host computer. Unfortunately, Xnest does not work in these conditions, generously exiting on:

X Error of failed request:  BadDrawable (invalid Pixmap or Window parameter)
Major opcode of failed request:  70 (X_PolyFillRectangle)
Resource id in failed request:  0x0
Serial number of failed request:  51056
Current serial number in output stream:  51056

The nice changes included the following. The virtual block device is not compiled in by default, so there is no support for a virtual hard drive without CONFIG_BLK_DEV_UBD :). Additionally, make sure that you have CONFIG_HIGHMEM disabled - the kernel will not compile otherwise. CONFIG_BLK_DEV_UBD_SYNC must be enabled, otherwise you will get the same filesystem corruption I battled for several hours.
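
The relevant .config fragment ends up looking like this (just the three options discussed above):

CONFIG_BLK_DEV_UBD=y
CONFIG_BLK_DEV_UBD_SYNC=y
# CONFIG_HIGHMEM is not set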

You might also need a set of patches for various compile-time errors. I have no idea why they were not included in the tree, but... The actual set of links to the patches will be provided here once I get to recompiling the vanilla kernel again with a customized config that excludes hardware which cannot physically be present in a UML instance.

So far I am happy.

Telepathy logging to CouchDB

CouchDB is no longer used by Ubuntu One, so this post only has a tiny amount of historical value.

A picture is worth a thousand words so here is the picture:

/galleries/dropbox/Couchdb-Logging.thumbnail.png

And here are those thousand words:

{
    "_id": "e5f826969abc88da927500f018d5acf3",
    "_rev": "1-aedda30270d6a77eea4901e35d9209b3",
    "record_type": "http://www.rtg.in.ua/empathy-im-couchdb",
    "to": "elfy.ua@gmail.com",
    "message": "Please say something for the world - testing CouchDB logging.",
    "from": "roman.yepishev@gmail.com",
    "time": "2009-12-01T18:58:18"
}
{
    "_id": "4f043681a4c11956c592916ea5f6a7ac",
    "_rev": "1-630942a3af4fbd54af0e66a47a1c8ed7",
    "record_type": "http://www.rtg.in.ua/empathy-im-couchdb",
    "to": "roman.yepishev@gmail.com",
    "message": "Hello",
    "from": "elfy.ua@gmail.com",
    "time": "2009-12-01T18:58:28"
}
{
    "_id": "483c5f7649caefb81ce49460a21fe20a",
    "_rev": "1-d18e6e8170a01e46c82ace0f196ebca0",
    "record_type": "http://www.rtg.in.ua/empathy-im-couchdb",
    "to": "elfy.ua@gmail.com",
    "message": "Ok, this is sufficient :)",
    "from": "roman.yepishev@gmail.com",
    "time": "2009-12-01T18:58:37"
}
/galleries/dropbox/telepathy-u1.png

Please note that the mission-control-5 shipped in Karmic has a bug preventing Observers from seeing conversations originated by local users. The fix is available only in their PPA at the moment - ppa:telepathy/ppa, if you are running the Karmic version of 'Software Sources'.
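
If you prefer the command line to 'Software Sources', the PPA can be added with the standard Ubuntu tooling:

sudo add-apt-repository ppa:telepathy/ppa
sudo apt-get update && sudo apt-get upgrade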

Ubuntu One

/galleries/dropbox/u1-64x64.png

It has been a while since I said something © Loud Howard, Dilbert

As you may know, Canonical has started its sync service called Ubuntu One, which provides file, contact and note synchronization across different computers. This is what has been keeping me busy with random thoughts about possible applications of such a service.

First of all, I started learning Python. I’ve been using Perl since 2004 and it is pretty much the only scripting language I know (bash does not count :)). So I decided that I could both learn Python and contribute somehow to the Ubuntu One community. This is how it all started.

Here are some Ubuntu One internals:

Clients

There are three “real” Ubuntu One clients: syncdaemon, which synchronizes your ~/Ubuntu One folder; desktopcouch, a wrapper for CouchDB that configures CouchDB replication to the Ubuntu One servers; and Tomboy, which uses the Snowy protocol to access the note sync service. All other clients (evolution-couchdb and Bindwood) simply interface with the local CouchDB instance and therefore do not connect to the U1 server directly.

syncdaemon is a daemon written in Python that listens for filesystem changes and reacts accordingly. It communicates with the U1 server using protobuf, the Protocol Buffers format developed by Google. It can be controlled to some extent via u1sdtool and ubuntuone-client-applet, which provides the “Cloud icon” applet in the notification area.
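
For instance, the current sync state can be queried from a terminal (flag quoted from memory, check u1sdtool --help):

u1sdtool --status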

desktopcouch is a project that aims to provide "a CouchDB on every desktop, and the code to help it happen". Basically, this is CouchDB wrapped into Python startup scripts, plus the desktopcouch.records Python library that handles authentication, JSON internals and other actions that should follow the specification.

Tomboy is a note-taking application written in C# which is now a part of the GNOME project. Tomboy developers created their own server-side application called Snowy. Ubuntu One servers support the protocol used by Snowy, therefore Tomboy can sync with their service as well.

The Server

Ubuntu One storage servers are located within the Amazon S3 cloud. There are two types of subscriptions – the free one, which gives you 2 GB of space within the U1 cloud, and the paid one, which gives you 50 GB of space for $10 a month. Since Amazon charges for the storage anyway, the “free” storage plans are covered by Canonical.

Q: Okay, this is all fine but how do I relate to all this?

A: I like to debug things, search for solutions, and I like new technologies. I have analyzed a lot of U1 client code so that I could write some myself, like the script that shows which local files are still not synced to the cloud, or the diagnostic script for common Ubuntu One issues. Since everything is written in real-world Python and the developers are usually available for discussions in the #ubuntuone channel on FreeNode, I consider this an invaluable source of knowledge.

What can be synchronized?

The following items are supported at the moment:

  • Files and folders within ~/Ubuntu One folder

  • Evolution Contacts

  • Tomboy notes

  • Firefox bookmarks

However, since the service is pretty new, there are some issues with the sync. The most common bugs are described in the Ubuntu One wiki. Some of the fixes are available only in the respective PPAs, such as ppa:ubuntuone/beta, and they are heading towards a Karmic update, which takes time.

Any plans?

Canonical is planning to do phone sync as well, based on Funambol. Additionally, everything that is put into CouchDB is replicated to the U1 servers…

One of my planned projects is to make Empathy (and every client built on top of the Telepathy framework) store chat logs in a CouchDB server. This will allow the complete chat history to be available on all machines even if the server does not support log storage. The code is almost there, but it is not ready to be released, basically because this was my first exercise in Python and it will be simpler to rewrite it from scratch :)

Update: See the next post about Telepathy logging to CouchDB.

In case you have any suggestions regarding this post or you want to know more about Ubuntu One, feel free to post questions here and visit the #ubuntuone channel on irc.freenode.net, where I am almost always available as rtgz.

Pastebinit defaults

There is a tiny app called pastebinit that posts any text you provide to various pastebin services, such as paste.ubuntu.com, pastebin.com, etc.

However, the man page does not specify how to set the defaults.

You can create a ~/.pastebinit.xml file in your home directory with content similar to the following:

<pastebinit>
    <pastebin>http://paste.ubuntu.com</pastebin>
    <author>author</author>
    <jabberid>author@example.net</jabberid>
    <format>text</format>
</pastebinit>
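
With the defaults in place, posting becomes a one-liner since pastebinit reads standard input:

dmesg | pastebinit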

See the answer on Launchpad.

What is smfpd?

After a major security threat was discovered in the Samsung Unified Linux Driver, they decided to create smfpd, a Simple MFP Daemon that runs as root and provides access to the parallel port via tcp/8822.

smfpd : Simple MFP Daemon

Usage :

smfpd [ options ]

Options :

-V n   : message Verbosity level n, default 0
-f     : debug mode: run in foreground, log to stdout
-h     : show help and exit
-i dev : use network device 'dev', default lo
-p n   : listen on port n, default 8822
-v     : show version

This daemon listens on the loopback interface by default and is used for LPT devices only. If you use a USB connection, you don’t need this daemon.

Xerox WorkCentre 3119 gained native SANE support

See this post for up-to-date instructions: Setting up my Xerox WC3119.

/galleries/dropbox/WorkCentre_3119.jpg

The scanner part of the Xerox WorkCentre 3119 is natively supported by SANE with the xerox_mfp backend. I failed to make the proprietary UnifiedLinuxDriver work with my device, and it turned out that it is not needed anymore.

Update: unfortunately, I forgot to create a bug report for these missing lines, so the WC3119 is not supported out of the box. Better late than never…

Moreover, with xerox_mfp batch scanning works properly, while with the Xerox/Samsung drivers it locked the scanner up.

In order to add udev support, the following lines need to be inserted into /lib/udev/rules.d/40-libsane.rules. This will adjust the ACLs for the device accordingly; otherwise you will need to adjust file access modes under /dev/bus/usb manually.

# Xerox WorkCentre 3119
ATTRS{idVendor}=="0924", ATTRS{idProduct}=="4265", ENV{libsane_matched}="yes"

/etc/sane.d/xerox_mfp.conf should have the following line appended in order for the backend to recognize the device:

#Xerox WorkCentre 3119.
usb 0x0924 0x4265
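
After reloading the udev rules and replugging the scanner, the backend should pick the device up; a quick check:

scanimage -L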

Tested on Ubuntu Karmic alpha 6.

OpenWRT Logging via Twitter

This won’t work anymore - basic auth is no longer supported by Twitter, so see the solution by lostman to get an OAuth-capable client!

Update: On Oct 29, 2011 I shut down the Twitter account. Here’s how it looked:

/galleries/dropbox/BeelineRouter%20account.thumbnail.png

/galleries/dropbox/twitter-48x48.png

Have you ever thought of your router tweeting status updates? E.g. if the connection goes down and comes back, you are notified right away in a lightweight fashion.

Never?

Never mind, here’s how to do that:

  1. Create an account on twitter.

  2. Create a Basic authentication string with

    echo -n "$username:$password" | base64
    
  3. Use the following shell script, substituting $base64string with the string obtained in previous step:

    #!/bin/sh
    
    TWEET="status=$*"
    
    CONTENT_LENGTH=`echo -n "$TWEET" | wc -c`
    
    MESSAGE="POST /statuses/update.xml HTTP/1.1
    Host: twitter.com
    User-Agent: OpenWRT Twitter
    Accept: application/json, text/javascript, */*
    Accept-Language: en-us,en;q=0.5
    Content-Type: application/x-www-form-urlencoded; charset=UTF-8
    Content-Length: $CONTENT_LENGTH
    Authorization: Basic $base64string
    
    $TWEET"
    
    echo "$MESSAGE" | telnet twitter.com 80 > /dev/null 2>/dev/null
    
  4. Save it as /usr/bin/tweet and start using it right away.

  5. Set up a cron job (see the example below), make it log something and... follow your router :)
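
A sample crontab entry (the wording and schedule are just an illustration; /usr/bin/tweet is the script from step 4) that posts the uptime every hour:

0 * * * * /usr/bin/tweet "uptime: $(uptime)"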

You can follow my router, just in case :)

Beeline IPTV with OpenWRT in Kiev

/galleries/dropbox/beeline-icon.jpg

Beeline Home Internet has started providing IPTV through its LAN. The description of the service is given here.

Since this was my first experience with multicasting, I had no clue what to do, where to get the traffic from, or how to forward it to my LAN.

It turned out to be pretty easy, though it unfortunately requires adjusting the kernel .config even in the latest OpenWRT trunk. Please note that I am using the brcm-2.4 kernel.

Index: trunk/target/linux/brcm-2.4/config-default
===================================================================
--- trunk/target/linux/brcm-2.4/config-default (revision 16833)
+++ trunk/target/linux/brcm-2.4/config-default (working copy)
@@ -373,3 +373,24 @@
# CONFIG_WDTPCI is not set
# CONFIG_WINBOND_840 is not set
# CONFIG_YAM is not set
+CONFIG_IP_MROUTE=y
+CONFIG_IP_PIMSM_V1=y
+CONFIG_IP_PIMSM_V2=y

The PIMSM* options are not necessary, but I included them to be on the safe side.

Then we need to build the firmware with igmpproxy included, or add igmpproxy to the already installed firmware.
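
On an already running system this is usually just (assuming the package feed provides igmpproxy):

opkg update
opkg install igmpproxy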

After this, configure igmpproxy in /etc/igmpproxy.conf:

quickleave
phyint eth0.1 upstream  ratelimit 0  threshold 1
altnet 192.168.0.1
phyint br-lan downstream ratelimit 0 threshold 1
phyint ppp0 disabled
phyint lo disabled

You can read about my network setup here.

The altnet IP address is required to allow the packets from these networks to be routed. The LAN address space is 10.0.0.0/8 and IPTV is broadcast from 192.168.0.1 in my case, which is why such an entry is required. You can also find out the source address by running tcpdump: after your host joins the group (igmpproxy should already be running), you will see a large number of packets going to some multicast address (say 225.225.225.1).
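
Something along these lines on the upstream interface shows both the source IP and the multicast group (interface name taken from the igmpproxy config above):

tcpdump -n -i eth0.1 'ip multicast'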

You can start igmpproxy with the -d switch so that it does not go into the background.

Having found the source IP, you need to add it to your firewall. For a one-time rule, do this directly:

iptables -A forwarding_wan -s $source_ip -d 224.0.0.0/4 -j ACCEPT

For a long-term solution, add this to /etc/config/firewall:

config rule
    option src wan
    option proto udp
    option src_ip 192.168.0.1
    option dest lan
    option dest_ip 224.0.0.0/4
    option target ACCEPT

You should now be ready to start receiving multicasts from 192.168.0.1. Start VLC, point it to the multicast address, say 225.225.225.1, and you should get a picture.
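
For example, from a machine on the LAN (the port here is an assumption - take the address and port from your provider's channel list):

vlc udp://@225.225.225.1:1234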

You will also need to add an additional firewall rule to /etc/config/firewall so that the stream does not stop suddenly. This happens because your ISP gateway (10.22.234.1 in my case) sends subscription queries to your router, and these queries are blocked by default. To prevent this, check your gateway IP and add the following rule:

config 'rule'
    option 'src' 'wan'
    option 'proto' 'igmp'
    option 'src_ip' '10.22.234.1'
    option 'target' 'ACCEPT'

My network setup has one major drawback: my WLAN and Ethernet are bridged together, so the two networks are connected. This was done to share the address space without any additional tricks for firewall and routing. It also means that even if I receive the IPTV signal via the wire, the WiFi network is flooded as well, rendering our laptops completely unusable because the WiFi cards are extremely busy receiving the packets. The router sends the packets fine, though.

This can be made to work properly by creating a new VLAN out of the existing configuration, creating the corresponding VLAN interfaces and firewall zones, and adjusting igmpproxy accordingly. If you want more info on how to do this, feel free to comment and I will describe my current setup completely.

Finally, if the client runs Linux and tcpdump shows that UDP data is flowing but the client does not want to cooperate, check the value of the following sysctl:

sysctl net.ipv4.conf.$interface.rp_filter

If it is set to 1 (true), the packets will be filtered by the kernel, as their source interface will not match the expected one (see RFC 1812, item 5.3.8).

This can be fixed the following way:

sysctl -w net.ipv4.conf.$interface.rp_filter=0
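
To make the change survive a reboot, the same setting can go into /etc/sysctl.conf (the interface name here is just an example):

net.ipv4.conf.eth0.rp_filter = 0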

Feel free to comment on the post if you need any additional information.

NAT and Port Forwarding in OpenWRT

If you happen to use port forwarding with your OpenWRT-powered Linksys WRT54GL, then you should know that there has been a problem that made DNAT unstable after some period of time – port forwarding stopped working completely or started redirecting to different ports (weird, isn’t it?), as described in #2558. The bug was marked as fixed two weeks ago, so you may want to give the fixed netfilter NAT module a try.

Update: no problems with port forwarding so far, looks like the patch is correct.

For those who reach this page looking for a way to set up port forwarding in OpenWRT without iptables magic, here it is:

/etc/config/firewall:

config redirect
    option  src       $source_interface
    option  src_dport $original_destination_port
    option  dest      $destination_interface
    option  dest_ip   $destination_ip
    option  dest_port $destination_port
    option  proto     $protocol

You can find more examples in the default /etc/config/firewall, but here’s how I have set up my SIP forwarding:

# incoming SIP
config redirect
    option  src       internet
    option  src_dport 5060
    option  dest      lan
    option  dest_ip   192.168.1.4
    option  dest_port 5060

One note: you need to run the firewall script after the corresponding interface initialization. In case the underlying device for $source_interface is down (say, a ppp link), the rules related to this interface will be skipped. That’s why there is /etc/hotplug.d/iface/20-firewall.