Archive for November, 2013

Nginx Raspberry Pi Configuration

This is more or less the configuration I am using for my installation of Nginx on a Raspberry Pi.

I compiled it with these options:
./configure --prefix=/opt/nginx --with-http_ssl_module --with-http_spdy_module --with-file-aio --with-pcre-jit
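
If you want to reproduce the build from scratch, the full sequence looks roughly like this (a sketch; the nginx version shown is just an example, any recent release that ships the SPDY module will do):

# download and unpack the sources
wget http://nginx.org/download/nginx-1.4.4.tar.gz
tar xzf nginx-1.4.4.tar.gz
cd nginx-1.4.4

# configure with the options above, then build and install into /opt/nginx
./configure --prefix=/opt/nginx --with-http_ssl_module --with-http_spdy_module --with-file-aio --with-pcre-jit
make
sudo make install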

I have disabled error logging by default; I only switch nginx to debug mode when I see an error I need to investigate.
I also set it to run as a “safe” user and limit the number of worker processes to spawn.


error_log /dev/null crit;
user nobody;
worker_processes 1;
pid /var/run/nginx.pid;

Then the http configuration starts.
keepalive_timeout is set to 60 to give some of my backend applications time to run and answer; the default value was causing the first request to fail while the application was being loaded from swap or whatnot.

http
{
    keepalive_timeout 60;
    gzip on;

This is the server configuration. I enabled SSL and SPDY, disabled access logging, set up basic auth, and set /var/www as the root folder.

server
{
    listen 8080 default_server ssl spdy;
    spdy_headers_comp 3;
    access_log off;
    auth_basic            "Restricted";
    auth_basic_user_file  htpasswd;
    ssl_certificate      /opt/nginx/cert.pem;
    ssl_certificate_key  /opt/nginx/key.pem;
    root /var/www;
    index index.html;
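
If you do not already have the credential and certificate files referenced above, both can be generated with openssl (a sketch; the username, file locations, subject, and validity period are only examples):

# create the basic auth file read by auth_basic_user_file
# (placed under the conf directory here; adjust if your nginx resolves the relative name elsewhere)
printf 'pi:%s\n' "$(openssl passwd -apr1)" | sudo tee /opt/nginx/conf/htpasswd

# generate a self-signed certificate and key for ssl_certificate / ssl_certificate_key
sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
    -keyout /opt/nginx/key.pem -out /opt/nginx/cert.pem \
    -subj "/CN=raspberrypi"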

Then I started adding the backend applications that are reverse-proxied by nginx (inside the server block).
Change app, host, and port to whatever you need.
The proxy timeouts are set higher than the defaults to avoid getting errors too quickly when the application is just... slow..., which happens quite easily with big Python applications on a first generation Raspberry Pi.

location /app {
    proxy_pass          http://host:port;
    proxy_set_header    X-Real-IP  $remote_addr;
    proxy_read_timeout  300;
    proxy_send_timeout  300;
}
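
A quick way to check that the proxy answers (hostname and credentials are placeholders for your own):

# -k accepts the self-signed certificate, -u supplies the basic auth credentials
curl -k -u pi:secret https://raspberrypi:8080/app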

And in case you also want to make part of your hard drive accessible via HTTP, just add (again inside the server block):

location /mount {
    autoindex on;
}
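
With root set to /var/www this serves whatever lives under /var/www/mount. If the drive is actually mounted somewhere else, alias can map the URL to the real mount point (a sketch; /mnt/usbdrive is just an example path):

location /mount/ {
    alias /mnt/usbdrive/;
    autoindex on;
}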


Fix for USB Transfers Making the System Unresponsive

Sometimes my computer would become unresponsive and I couldn't do anything while copying files to slow USB keys. I found the reason for this, and the solution, online.

Apparently the setting /sys/kernel/mm/transparent_hugepage/defrag can really influence this issue. If it's set to “always” the machine will try to defragment pages to make room for huge pages.
Setting it to “madvise” will activate the behavior only for address spaces that request it, without disabling the feature outright: HugePages-aware applications can still use them, and, especially, your machine will stop hanging.
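
Checking and changing the value is a one-liner each (the write needs root, and it does not survive a reboot unless you repeat it at boot, e.g. from /etc/rc.local):

# the currently active value is shown in square brackets
cat /sys/kernel/mm/transparent_hugepage/defrag

# defragment only for address spaces that ask for it
echo madvise | sudo tee /sys/kernel/mm/transparent_hugepage/defrag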

As stated in a Phoronix article linked at the bottom of the page, since kernel 3.8 this feature avoids allocating huge chunks of memory when not necessary, so setting /sys/kernel/mm/transparent_hugepage/enabled to “always” may be a good idea after all.
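
The same pattern works for that knob too:

# check which mode is active (shown in square brackets)
cat /sys/kernel/mm/transparent_hugepage/enabled

# enable transparent huge pages system-wide
echo always | sudo tee /sys/kernel/mm/transparent_hugepage/enabled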

Here you can read more about HugePages support
https://www.kernel.org/doc/Documentation/vm/transhuge.txt

Here you can see some benchmarks
http://structureddata.org/2012/06/18/linux-6-transparent-huge-pages-and-hadoop-workloads/

Here you can read more about it
http://www.phoronix.com/scan.php?page=news_item&px=MTI1MTQ
http://www.phoronix.com/scan.php?page=article&item=linux_transparent_hugepages
