
OpenELEC on a first-rev RaspberryPi

I was testing my Raspberry Pi with Raspbmc and OpenELEC for some time before deciding to stick with OpenELEC when the 4.0 betas started being released.
Everything worked great, but I had some trouble with YouTube and streaming media in general (like Comedy Central shows), as they would randomly bring the machine to a grinding halt followed by a manual unplug-plug reboot; in the case of OpenELEC you could instead wait for it to restart XBMC by itself.

Finally I found a video that always triggered it after a certain period of time, and thanks to the XBMC debugging logs I noticed that the free RAM was slowly decreasing until the lack of memory totally annihilated the machine. Unfortunately these Pis have only 256MB of RAM, half of which is usually dedicated to decoding HD videos.

I tried to fix it in various ways, even adding a tiny swap partition in the non-aligned empty disk space left by the OpenELEC installer between the system and the data partitions.
Adding it (using parted on the SD card from my laptop) removed the problem completely, but it also requires a task to enable the swap partition on boot, as the OpenELEC developers have their reasons for avoiding it (just search their forum for the answers).
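One way to wire this up is OpenELEC's autostart.sh hook instead of cron; a minimal sketch, assuming the new swap partition ended up as /dev/mmcblk0p3 (adjust to wherever parted created yours, and format it once beforehand with mkswap):

# /storage/.config/autostart.sh - OpenELEC runs this script at every boot
(
  # enable the extra swap partition created in the unused gap
  swapon /dev/mmcblk0p3
) &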

With some testing I found a compromise setup that makes everything work, and it also made the UI incredibly more responsive during playback, even though the Pi is set only to moderate overclocking (800MHz).

I reduced gpu_mem to 108 in the config.txt file (128-16-4, a value obtained by trial and error) and appended the zcache parameter to cmdline.txt. I haven't played with the device's dynamic memory allocation, as at the time of writing it's unsupported.
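For reference, the relevant config.txt lines would look like the following; the arm_freq value is my assumption for the 800MHz moderate preset. The zcache parameter instead goes at the end of the single existing line in cmdline.txt, not on a line of its own.

# config.txt (on the boot partition of the SD card)
gpu_mem=108
arm_freq=800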




Nginx Raspberry Pi Configuration

This is more or less the configuration I am using for my installation of Nginx on a Raspberry Pi.

I compiled it with these options:
./configure --prefix=/opt/nginx --with-http_ssl_module --with-http_spdy_module --with-file-aio --with-pcre-jit
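After configuring, the usual build-and-install steps follow (this assumes the OpenSSL and PCRE development headers are already installed, since the SSL and pcre-jit options need them):

make
sudo make install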

I have disabled error logging by default; I only run nginx in debug mode when I see an error to investigate.
Set it to a "safe" user and limit the number of worker processes to spawn.


error_log /dev/null crit;
user nobody;
worker_processes 1;
pid /var/run/nginx.pid;

Then the http configuration starts.
keepalive_timeout is set to 60 to give some of my backend applications time to run and answer; the default value was causing the first request to fail while the application was being loaded from swap or whatnot.

http
{
keepalive_timeout 60;
gzip on;

This is the server configuration. I enabled SSL and SPDY, disabled access logging, turned on basic auth, and set /var/www as the root folder.

server
{
listen 8080 default_server ssl spdy;
spdy_headers_comp 3;
access_log off;
auth_basic            "Restricted";
auth_basic_user_file  htpasswd;
ssl_certificate      /opt/nginx/cert.pem;
ssl_certificate_key  /opt/nginx/key.pem;
root /var/www;
index index.html;

Then I started adding the backend applications that are reverse-proxied by nginx (inside the server block).
Change app, host, and port to whatever you need.
The proxy timeout configuration is set higher to avoid getting errors too quickly when the application is just... slow... which happens quite easily with big Python applications on a first-gen Raspberry Pi.

location /app {
proxy_pass          http://host:port;
proxy_set_header    X-Real-IP  $remote_addr;
proxy_read_timeout  300;
proxy_send_timeout  300;
}
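A quick way to check that the proxied application answers from another machine; the hostname, path, and credentials here are placeholders for whatever you configured, and -k skips certificate verification, which is handy with a self-signed certificate:

curl -k -u user:password https://raspberrypi:8080/app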

And in case you also want to make a part of your hard drive accessible via HTTP, just add this (again inside the server block):

location /mount {
autoindex on;
}
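For reference, assembling all the excerpts above gives a skeleton like this. Note the empty events block: the post never shows it, but nginx refuses to start without one. host and port in proxy_pass are still placeholders.

error_log /dev/null crit;
user nobody;
worker_processes 1;
pid /var/run/nginx.pid;

events {
}

http {
    keepalive_timeout 60;
    gzip on;

    server {
        listen 8080 default_server ssl spdy;
        spdy_headers_comp 3;
        access_log off;

        auth_basic            "Restricted";
        auth_basic_user_file  htpasswd;
        ssl_certificate       /opt/nginx/cert.pem;
        ssl_certificate_key   /opt/nginx/key.pem;

        root /var/www;
        index index.html;

        location /app {
            proxy_pass          http://host:port;
            proxy_set_header    X-Real-IP  $remote_addr;
            proxy_read_timeout  300;
            proxy_send_timeout  300;
        }

        location /mount {
            autoindex on;
        }
    }
}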


USB Transfer making the system unresponsive fix

Sometimes my computer would become unresponsive, unable to perform any action, while copying to slow USB keys. I found the reason and the solution to this online.

Apparently the setting /sys/kernel/mm/transparent_hugepage/defrag can really influence this issue. If it's set to "always", the machine will try to defragment pages to make room for huge pages.
Setting it to "madvise" will activate the behaviour only on address spaces that request it, without disabling the feature entirely: HugePages-aware applications can still use them, and, especially, your machine will stop hanging.
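For reference, this is how to flip the setting at runtime; it does not survive a reboot, so add the echo line to /etc/rc.local or an equivalent boot script to make it stick:

# show the current value (the active one is in square brackets)
cat /sys/kernel/mm/transparent_hugepage/defrag

# switch to madvise
echo madvise | sudo tee /sys/kernel/mm/transparent_hugepage/defrag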

As stated in the Phoronix article linked at the bottom of the page, since kernel 3.8 this feature avoids allocating huge chunks of memory when not necessary, so maybe setting /sys/kernel/mm/transparent_hugepage/enabled to "always" may be a good idea after all.

Here you can read more about the HugePages support
https://www.kernel.org/doc/Documentation/vm/transhuge.txt

Here you can see some benchmarks
http://structureddata.org/2012/06/18/linux-6-transparent-huge-pages-and-hadoop-workloads/

Here you can read more about it
http://www.phoronix.com/scan.php?page=news_item&px=MTI1MTQ
http://www.phoronix.com/scan.php?page=article&item=linux_transparent_hugepages



Reverse Proxy for RaspberryPi

Edit

At the moment I am using nginx anyway, as it's very easy to configure and does absolutely everything I need with almost no resource usage.
More information on the configuration can be found in its dedicated post.

I have various web services running on my Raspberry Pi, probably like many other Raspberry Pi users out there.

I used Apache and lighttpd as reverse proxies to avoid opening (and later remembering) many ports, using just one instead. But with only 256MB of RAM, and seeing that both applications consumed quite some memory while sitting unused most of the time, I wrote a tiny alternative in Python. It even works with PyPy, but that would just throw away the memory-saving advantage; at that point you may as well keep using the reverse proxies I mentioned.

The program itself is not complete yet for my needs; I am planning to add direct SSL support, a tiny web server to serve an index of the various services I have, and IPv6 on the server side (it already tries to connect over IPv6 where possible).

So far it uses about a third less RAM than lighttpd did, with no issues. I was even thinking about a possible xinetd version of it, but that would probably be even worse considering how many connections modern browsers create: it would most likely just kill the machine by forking too much, or leave the browser unable to open enough connections. I'll do some tests.

You can find the link to the project on the code page, or just here: git://github.com/mellon85/proxy.git



Postfix Admin with domain quotas

Recently I adapted a patch that I found somewhere on the internet (I'm sorry, original author, I can't find the source any more) and ported it to the latest stable version of Postfix Admin (2.3.2) to change the behaviour of the quotas.
The default behaviour is to set the maximum quota a single user in the domain may have, but what I needed was a maximum quota for the whole domain, which is slightly different but more useful from my point of view.

I have posted it on pastebin.


VA-API and AMD XVBA

Lately I was just getting mad that I couldn't watch any 1080p video on my desktop PC, which has a Radeon HD4890 (ATI Catalyst 10.1) and a Phenom X3 730. MPlayer was even teasing me with this message…

************************************************
**** Your system is too SLOW to play this! ****
************************************************

This pisses me off, a lot…
So I decided to try installing VA-API with the XVBA backend and the patches for MPlayer to use it.

Since there is no packaging whatsoever for any of the needed pieces in the Ubuntu/Debian repositories (not even the unstable ones), I got the packages directly from Splitted-Desktop.

I got the latest xvba-video (0.6.4-1) and libva(-dev) (0.31.0-1+sds9) packages (nice to see that VA-API even has a VDPAU backend, by the way), and the latest version of the MPlayer VA-API patches (the full version contains the MPlayer code, the small one has a script to fetch it with svn).
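Installing the downloaded packages is a plain dpkg run; the glob patterns below are just shorthand for whatever exact filenames you downloaded:

sudo dpkg -i libva1_*.deb libva-dev_*.deb xvba-video_*.deb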

The installation of the packages went fine, but to get MPlayer compiled a run of
apt-get build-dep mplayer
is necessary to install all the development tools, headers, and so on.

Then I used their bundled script "checkout-patch-build.sh", which does (as you may have already guessed) all the svn checkout, patching, and building of MPlayer.
It went flawlessly and after a while I had MPlayer ready to run.

I tried it with an H.264 1080p movie:
./mplayer -vo vaapi -va vaapi movie.mkv
and it worked like a charm: instead of hogging a CPU and still failing to decode in real time, it was using only 8% of CPU time!

It is still unstable (for the moment; it works for me anyway) and VA-API is not broadly implemented yet (hoping this will change in the near future, as it provides Intel, Nvidia, and ATI accelerated decoding at once!), but it really does its job well!
If you want to try more, Splitted-Desktop has patches for gnash too.



Hamachi and UDP Broadcast Games

I was recently trying to play some old games (Starcraft, Warcraft 3, and others) over LAN with some friends through the Hamachi VPN system. All attempts were unsuccessful because the broadcast UDP packets weren't sent over the VPN interface but only on the local LAN.

To solve this we just had to manipulate the routing table of the systems.
After connecting to the Hamachi VPN, just run these commands as the root user.

OS X: as long as Hamachi uses 5.x.x.x for its nodes
route add -net 255.255.255.255 5.255.255.255

Linux: where ham0 is the Hamachi interface. There are two options available; test which one works for you.
route add -net 255.255.255.255 netmask 255.255.255.255 dev ham0

sudo route add -n 5.255.255.255 gw 5.0.0.1 dev ham0
as suggested by Blake Lightly

This will redirect all the broadcast packets. It may interfere with some other program, but as long as you just want some time off to play, it works great. To undo the change to the routing table we didn't do anything other than disconnect from the Hamachi VPN; the system handles the disappearance of the interface by removing every route involved.
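If you would rather remove the route by hand without disconnecting, the inverse commands should do it (a sketch mirroring the routes added above):

# Linux
route del -net 255.255.255.255 netmask 255.255.255.255 dev ham0

# OS X
route delete -net 255.255.255.255 5.255.255.255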

Summing up*

Game          Linux   MacOS
Warcraft 3    play    host
Starcraft     host    host
Delta Force   host    untested

* Linux needs no setup other than starting Hamachi. This was all tested between a Mac and a Linux system.

If someone can test other games and maybe help with the Linux problem, that would be great.

Hope this helps!
[last update 2 May 2010]

