Tomb and Thin Provisioning

Tomb is a really nice encryption tool.

Unfortunately I had to store some data which might or might not grow “a lot”, in a place where there isn’t much space to begin with.. and Tomb doesn’t let you create volumes from a non-randomized data source (which would admittedly be less secure).

I made a very small patch to the latest version (2.1.1) so that it creates/resizes sparse files as disk images instead. This is clearly dangerous for data that must be absolutely protected, as you give away some information about the content of the image that might help an attacker (for instance, the allocated blocks reveal roughly how much data the volume contains and where it sits).

If your use case doesn’t require such a high level of security, you can use it to delay the space allocation. For instance, a clean disk allocated for 32GB of space occupies much less than that while it’s empty:


tomb dig -s 32768 secret.tomb
tomb lock secret.tomb -k secret.tomb.key

du -h secret.tomb
133M secret.tomb

The current 133M are mostly blocks used by the filesystem itself, but you can get that down even more by first allocating a small disk, changing the reserved block count of the filesystem, and then resizing it to a bigger disk:


tomb dig -s 10 secret.tomb
tomb lock secret.tomb -k secret.tomb.key
tomb resize -s 32768 secret.tomb -k secret.tomb.key

du -h secret.tomb
5.1M secret.tomb
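
The reserved block count change itself isn’t shown above: it has to be done while the tomb is open, against the mapper device Tomb creates. A sketch (the exact mapper name varies between Tomb versions, check ls /dev/mapper after opening):

tomb open secret.tomb -k secret.tomb.key
tune2fs -m 0 /dev/mapper/tomb.secret   # hypothetical mapper name: set reserved blocks to 0%
tomb close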

The change is very simple, but I don’t want to share it as a patch or send it upstream because of the weakened security. I will just point out that instead of using count in the dd commands for dig and resize, you should use seek.
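
To give the general idea (this is the technique, not the actual patch): dd with count actually writes the random data for the whole size, while dd with seek and count=0 only sets the file length, leaving a hole that the filesystem allocates on demand.

# full allocation: writes 32GB of random data up front
dd if=/dev/urandom of=secret.tomb bs=1048576 count=32768

# sparse allocation: writes nothing, just extends the file size
dd if=/dev/zero of=secret.tomb bs=1048576 seek=32768 count=0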


OpenELEC on a first-revision RaspberryPi

I was testing my Raspberry Pi with Raspbmc and OpenELEC for some time before deciding to stick with OpenELEC when the 4.0 betas started being released.
Everything worked great, but I had some trouble with YouTube and streaming media in general (like Comedy Central shows), as they would randomly bring the machine to a grinding halt, followed by a manual unplug-replug reboot or, in the case of OpenELEC, by waiting for it to restart XBMC on its own.

Finally I found a video that always triggered it after a certain period of time, and thanks to the XBMC debugging logs I noticed that the free RAM was just slowly decreasing until the lack of memory totally annihilated the machine; unfortunately these Pis have only 256MB of RAM, half of which is usually dedicated to decoding HD videos.

I tried to fix it in various ways, even adding a tiny swap partition in the non-aligned empty disk space left by the OpenELEC installer between the system and the data partitions.
Adding it (using parted on the SD card from my laptop) removed the problem completely, but it also requires a boot task to enable the swap partition, as the OpenELEC developers have their reasons for not doing it themselves (just search their forum for the answers); a sketch of that boot task follows.
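
A minimal sketch, assuming the swap partition ended up as /dev/mmcblk0p3 (verify with parted or fdisk before using it); OpenELEC runs /storage/.config/autostart.sh at boot, which can take the place of the cron task:

#!/bin/sh
# /storage/.config/autostart.sh
# enable the swap partition created in the gap left by the installer
swapon /dev/mmcblk0p3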

With some testing I found a compromise setup that makes everything work, and it also made the UI incredibly more responsive while playing, even though the Pi is set only to moderate overclocking (800MHz).

In the config.txt file I reduced gpu_mem to 108 (128-16-4, a value obtained by trial and error), and I appended the zcache parameter to cmdline.txt. I haven’t played with the device’s dynamic memory allocation as it is, at the time of writing, unsupported.
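
For reference, the relevant bits end up looking like this (arm_freq is my assumption for how the 800MHz overclock is configured; both files live on the SD card’s boot partition):

# config.txt
gpu_mem=108
arm_freq=800

# cmdline.txt: append " zcache" to the end of the existing single line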


Nginx Raspberry Pi Configuration

This is more or less the configuration I am using for my Nginx installation on a Raspberry Pi.

I compiled it with these options:
./configure --prefix=/opt/nginx --with-http_ssl_module --with-http_spdy_module --with-file-aio --with-pcre-jit

I have disabled error logging by default; I only switch nginx to debug mode when I need to investigate an error.
Set it to run as a “safe” user and limit the number of worker processes it spawns.


error_log /dev/null crit;
user nobody;
worker_processes 1;
pid /var/run/nginx.pid;

Then the http configuration starts.
keepalive_timeout is set to 60 to give some of my backend applications time to wake up and answer; the default value was causing the first try to fail while the application was being loaded back from swap or whatnot.

http
{
keepalive_timeout 60;
gzip on;

This is the server configuration. I enabled SSL and SPDY, disabled access logging, set up basic auth, and set /var/www as the root folder.

server
{
listen 8080 default_server ssl spdy;
spdy_headers_comp 3;
access_log off;
auth_basic            "Restricted";
auth_basic_user_file  htpasswd;
ssl_certificate      /opt/nginx/cert.pem;
ssl_certificate_key  /opt/nginx/key.pem;
root /var/www;
index index.html;

Then I started adding the backend applications that are reverse-proxied by nginx (inside the server block).
Change app, host, and port to whatever you need.
The proxy timeouts are set higher than the defaults to avoid getting errors too quickly when the application is just.. slow.., which happens quite easily with big Python applications on a first-generation Raspberry Pi.

location /app {
proxy_pass          http://host:port;
proxy_set_header    X-Real-IP  $remote_addr;
proxy_read_timeout  300;
proxy_send_timeout  300;
}

And in case you also want to make a part of your hard drive accessible via HTTP, just add (again inside the server block); with the root set above, this serves whatever is under /var/www/mount:

location /mount {
autoindex on;
}


USB transfers making the system unresponsive: a fix

Sometimes my computer became unresponsive and couldn’t do anything while copying to slow USB keys. I found the reason and the solution online.

Apparently the setting /sys/kernel/mm/transparent_hugepage/defrag can really influence this issue. If it’s set to “always”, the kernel will try to defragment memory pages to make room for huge pages whenever they are requested.
Setting it to “madvise” enables that behavior only in address spaces that explicitly ask for it, without disabling the feature entirely: HugePages-aware applications can still use them and, especially, your machine will stop hanging.
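
The change can be tried on the fly; it doesn’t survive a reboot, so persist it in rc.local or an equivalent boot script if it helps:

# as root; the active value is shown in brackets
echo madvise > /sys/kernel/mm/transparent_hugepage/defrag
cat /sys/kernel/mm/transparent_hugepage/defrag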

As stated in the Phoronix article linked at the bottom of the page, since kernel 3.8 this feature avoids allocating huge chunks of memory when it’s not necessary, so maybe setting /sys/kernel/mm/transparent_hugepage/enabled to “always” may be a good idea after all.

Here you can read more about the HugePages support
https://www.kernel.org/doc/Documentation/vm/transhuge.txt

Here you can see some benchmarks
http://structureddata.org/2012/06/18/linux-6-transparent-huge-pages-and-hadoop-workloads/

Here you can read more about it
http://www.phoronix.com/scan.php?page=news_item&px=MTI1MTQ
http://www.phoronix.com/scan.php?page=article&item=linux_transparent_hugepages


Reverse Proxy for RaspberryPi

Edit: at the moment I am using nginx anyway, as it’s very easy to configure and does absolutely everything I need with almost no resource usage.
More information on the configuration can be found in its own post.

I have various web services running on my Raspberry Pi, probably like many other Raspberry Pi users out there.

I used Apache and lighttpd as reverse proxies to avoid opening many ports (and having to remember them afterwards) and use just one instead. But, having only 256MB of RAM and seeing that both applications consumed quite some memory while sitting unused most of the time, I wrote a tiny alternative in Python. It even works with pypy, but that would just make you lose the memory-saving advantage, at which point you might as well keep using the reverse proxies I mentioned.

The program itself is not complete yet for my needs; I am planning to add direct SSL support, a tiny web server to serve an index of the various services I have, and IPv6 on the server side (it already tries to connect over IPv6 where possible).

So far it consumes about a third less RAM than lighttpd did, with no issues. I was even thinking about a possible xinetd version, but that would probably be even worse considering how many connections modern browsers create.. it would most likely just kill the machine by forking too much, or leave the browser unable to open enough connections.. I’ll do some tests.

You can find the link to the project on the code page, or just here: git://github.com/mellon85/proxy.git


Postfix Admin with domain quotas

Recently I adapted a patch which I found somewhere on the internet (I’m sorry, original author, I can’t find the source anymore) and ported it to the latest stable version of Postfix Admin (2.3.2) to change the behaviour of the quotas.
The default behaviour is to set the maximum quota a single user in the domain may have, but what I needed was a maximum quota for the whole domain, which is slightly different but more useful from my point of view.

I have posted it on pastebin.


Concat videos with srt subtitles

As lately I am encountering a lot of split video files with .srt subtitles, I wrote a simple script that concatenates the videos and their subtitles with the correct timing shift.
It just requires mencoder and mplayer to be installed to do the concatenation. The script does not support mkv files, as they have embedded subtitles and would need some mkv tool operations, but mkv files are usually not senselessly split anyway.

All the code can be found on github

Usage is really simple.
If you have file1.avi, file2.avi, file1.srt and file2.srt, just do:

perl concat-movie.pl file1.avi file2.avi output.avi

and you’ll have output.avi and output.srt ready to be used, or even:

perl concat-movie.pl file1.avi file2.avi output.mp4

and the output will use .mp4 as the container (check the mplayer manuals for the supported formats).
Here is the script.

#!/usr/bin/perl
use strict;
use warnings;
use Carp;

sub add_time($);
sub shift_sub($$$);

croak "Wrong number of arguments" if (@ARGV != 3);

my $file1     = shift;
my $file2     = shift;
my $output    = shift;

$output =~ /(.*)\..*/;
my $srt_output = "$1.srt";
$file1 =~ /(.*)\..*/;
my $srt1 = "$1.srt";
$file2 =~ /(.*)\..*/;
my $srt2 = "$1.srt";

croak "Input $file1 non existent"    if (!-e$file1);
croak "Input $file2 non existent"    if (!-e$file2);
croak "Input $srt1 non existent"    if (!-e$srt1);
croak "Input $srt2 non existent"    if (!-e$srt2);
croak "Output $output exists"        if (-e$output);
croak "Output $srt_output exists"    if (-e$srt_output);

# get file1 length
my $file1_length = 0;
open(INFO,"mplayer -vo null -nosound -frames 0 -identify '$file1'|")
    or croak "can't run mplayer on $file1";
while (<INFO>) {
    $file1_length = $1 if(/^ID_LENGTH=(.*)\..*/);
}
close INFO;

# convert $file1_length into $h,$m,$s
my ($h,$m,$s);
$s = $file1_length % 60;
$m = int(($file1_length % (60*60)) / 60);
$h = int(($file1_length % (60*60*24)) / (60*60));

# concatenate the subs
shift_sub($srt1,$srt2,$srt_output);

# concatenate the movie
system("mencoder -ovc copy -oac copy '$file1' '$file2' -o '$output'");

sub shift_sub($$$) {
    my ($file1,$file2,$output) = @_;

    open OUTPUT, ">$output" or croak "can't open $output";
    open FILE, $file1 or croak "can't open $file1";
    my $idx_shift = 0;
    while (!eof(FILE)) { # pass through, storing the highest subtitle index
        $_ = <FILE>;
        print OUTPUT;
        $idx_shift = $1 if (/^(\d+)\r\n$/);
    }
    close FILE;

    open FILE, $file2 or croak "can't open $file2";
    while (!eof(FILE)) {
        # srt index: shift it by the last index of the first subtitle file
        my $num = <FILE>;
        my $idx = $num + $idx_shift;
        print OUTPUT "$idx\r\n";

        # srt timing: shift start and end by the length of the first movie
        my $timing = <FILE>;
        my ($start,$end) = split(/\s-->\s/,$timing);
        print OUTPUT add_time($start)." --> ".add_time($end)."\r\n";

        # srt text: copy lines until the blank line ending the entry
        my $line;
        do {
            $line = <FILE>;
            print OUTPUT $line;
        } while (!eof(FILE) && $line !~ m/^\r\n$/);
    }
    close FILE;
}

sub add_time($) {
    my ($hh,$mm,$ss,$sss) = split(/[:,]/,$_[0]);
    $ss += $s; $mm += $m; $hh += $h;
    if ( $ss >= 60 ) { $mm++; $ss -= 60; }
    if ( $mm >= 60 ) { $hh++; $mm -= 60; }
    return sprintf("%02d:%02d:%02d,%03d",$hh,$mm,$ss,$sss);
}
