Patch for mod_apcupsd.pl

I’ve got two APC UPSes connected to a Raspberry Pi 3 and use apcupsd for getting the status data. The Pi provides the status data via SNMP to my Zabbix installation.

For this to work I use an snmpd extension (mod_apcupsd.pl). Unfortunately it only queries the default UPS via apcaccess.

I hacked together a patch (I know next to nothing about Perl) to support multiple UPSes.
You can find it after the break.
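For context: a multi-UPS setup with apcupsd usually means running one daemon instance per UPS, each with its own config file and its own NIS port, so apcaccess can be pointed at each instance separately. A sketch of the idea – the file names, UPS names and the second port are assumptions for illustration, not taken from my actual setup:

```
# /etc/apcupsd/apcupsd-ups1.conf (assumed filename)
UPSNAME ups1
UPSCABLE usb
UPSTYPE usb
NISPORT 3551

# /etc/apcupsd/apcupsd-ups2.conf (assumed filename)
UPSNAME ups2
UPSCABLE usb
UPSTYPE usb
NISPORT 3552
```

With that in place, `apcaccess status localhost:3551` queries the first UPS and `apcaccess status localhost:3552` the second – which is exactly the host:port distinction the patched mod has to make.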

Remote capturing with wireshark

I always forget these parameters so better keep ’em here.

Full Console output:

tcpdump -nnXSs 0 -i eth0 port 389

Remote cap:

ssh root@HOST tcpdump -U -s0 -i eth0 -w - 'not port 22 and port 389' | wireshark -k -i -

Web filtering with SSL Man-In-The-Middle

I work for a school and we need to filter web traffic and block offending sites.

Blocking URLs and filtering unencrypted HTTP traffic doesn’t do the job anymore since more and more sites are using HTTPS – which is a good thing I strongly support and encourage.
Examples? Google uses HTTPS, YouTube uses HTTPS. We can’t block YouTube completely since there is content on it which might prove useful for learning. So YouTube needs to pass. But we need to be able to filter porn, racism and the like in the video descriptions.

So now we have a problem: Govt says we need to protect our children from inappropriate content, and I don’t want them to waste my bandwidth with porn/music either. But depending on how you look at it, decrypting encrypted traffic is also a bit… meh.

So, ethics aside, we need to filter HTTPS traffic by mounting a man-in-the-middle attack.
I’m using e2guardian for this, a fork of the now-defunct DansGuardian which I used previously.

e2guardian, or E2G for short, brings SSLMITM out of the box. The problem: it doesn’t work. At least not on Debian.

2016.10.5 9:56:32 - 10.4.6.87 https://www.google.de:443 *DENIED* Certificate supplied by server was not valid: unable to get local issuer certificate CONNECT 0

e2guardian uses openssl directly to grab the certificate presented by the target host. By default openssl trusts no one and therefore throws an error because it can’t verify the integrity of the target host.
See:

 openssl s_client -connect google.com:443
 CONNECTED(00000003) depth=2 C = US, O = GeoTrust Inc., CN = GeoTrust Global CA
 verify error:num=20:unable to get local issuer certificate
 verify return:0
 ...
 Start Time: 1407377002
 Timeout : 300 (sec)
 Verify return code: 20 (unable to get local issuer certificate)
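That failure is just openssl’s empty default trust store at work. A quick offline demo of the same mechanism – the throwaway CA and hostname below are made up, nothing here comes from the real setup:

```shell
# work in a scratch directory
cd "$(mktemp -d)"

# mint a throwaway CA and a cert signed by it (names are hypothetical)
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.pem \
    -subj "/CN=Demo CA" -days 1
openssl req -newkey rsa:2048 -nodes -keyout host.key -out host.csr \
    -subj "/CN=demo.example"
openssl x509 -req -in host.csr -CA ca.pem -CAkey ca.key -CAcreateserial \
    -out host.pem -days 1

# without a CA file: "unable to get local issuer certificate", just like above
openssl verify host.pem

# with the issuing CA supplied: verification succeeds
openssl verify -CAfile ca.pem host.pem
```

Same error number, same fix: openssl only accepts a chain once it is told which CAs to trust.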

So we need to create a trusted CA bundle like the one shipped with Firefox.
Or we grab a ready-made bundle – e.g. the Mozilla CA bundle that the curl project distributes as cacert.pem.

Using this, things look a little bit better:

openssl s_client -connect google.com:443 -CAfile cacert.pem
depth=2 C = US, O = GeoTrust Inc., CN = GeoTrust Global CA
verify return:1
depth=1 C = US, O = Google Inc, CN = Google Internet Authority G2
verify return:1
depth=0 C = US, ST = California, L = Mountain View, O = Google Inc, CN = *.google.com
verify return:1
 Verify return code: 0 (ok)

Looking at e2guardian, things still look the same. Of course: E2G doesn’t know about our cacert.pem, and neither does openssl.
Issuing the above command without -CAfile leads to the same result as before.
The problem is that there is no option to specify the CA file in openssl.cnf. There are some options which look kinda promising, but they are only for signing certs.
There is an option in e2guardian.conf:

#sslcertificatepath = '/etc/e2guardian/trustedCAs/'

but I couldn’t get this to work.

So back to work on openssl. An strace shows that openssl s_client without the -CAfile option doesn’t care about openssl.cnf at all. But it checks a hardcoded path:

open("/usr/lib/ssl/cert.pem", O_RDONLY) = -1 ENOENT (No such file or directory)

Placing (and renaming) our cacert.pem at this location not only does the job for openssl – even E2G is working now.

The next thing is to figure out how we’re going to deploy our own CA to the clients. In MS IE this is no problem thanks to GPOs, but we may want FF working as well…
And I need to figure out if I really want to be able to potentially read our users’ banking details, or if I implement some idiot-proof switch the teacher has to press (and must not forget to press!) to enable SSLMITM when a computer room is used by little children instead of semi-adults.

Edit: There was a ticket for openssl in 2014 regarding the issue.

Replacing strings inside a specific filetype

Recently I made a spelling mistake (due to lack of knowledge) in a PHP project. By the time I realized it, around 40 files contained the typo in various places: table and function names, comments and so on.

Searching for the affected files is pretty easy:

find . -type f -name "*.php" | xargs -l10 grep "tyrpo"

To replace the typo, sed comes to the rescue as usual:

find . -type f -name "*.php" -exec sed -i 's/tyrpo/typo/g' {} \;
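A minimal dry run on a scratch directory – the file names and contents below are made up – to see both commands in action:

```shell
# stage a fake project with the typo in several places
demo=$(mktemp -d)
printf '<?php // a tyrpo in a comment\n' > "$demo/a.php"
printf '<?php function get_tyrpo() { return "tyrpo"; }\n' > "$demo/b.php"

# find the affected files first...
find "$demo" -type f -name "*.php" | xargs -l10 grep -l "tyrpo"

# ...then fix them in place; the /g flag catches multiple hits per line
find "$demo" -type f -name "*.php" -exec sed -i 's/tyrpo/typo/g' {} \;

# nothing left to find afterwards
grep -r "tyrpo" "$demo" || echo "all clean"
```

Note the /g: without it sed replaces only the first occurrence on each line, which bites when a typo appears twice in one statement.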

Squeeze: Virtualbox – routing on vboxnet0

I installed VirtualBox recently on my ThinkPad to set up three Linux guests for a lab environment.
The guests are configured with the host-only adapter. The host-only adapter adds a virtual network interface vboxnet0 to the host system (default IP: 192.168.56.1) and provides DHCP for the guests.
NAT mode was no option since I need to access the guests from the host network. Bridged mode was no option either since my ThinkPad is configured for DHCP (office and home use): in bridged mode my guests would get a different IP depending on whether I’m at work or at home. I’d have to reconfigure my external tools every time I switch networks, or go for static IPs, which would lead to routing problems.
The host-only network seemed to be the most viable option here.

While the guests can communicate through the host-only adapter, there’s the problem that they have no internet access because vboxnet0 is not routed.
To get this working, enable IP forwarding and proxy ARP on the host:

echo 1 > /proc/sys/net/ipv4/ip_forward
echo 1 > /proc/sys/net/ipv4/conf/all/proxy_arp
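These echo commands don’t survive a reboot. To make them stick, the same switches can go into a sysctl drop-in – the filename here is arbitrary:

```
# /etc/sysctl.d/vbox-routing.conf
net.ipv4.ip_forward = 1
net.ipv4.conf.all.proxy_arp = 1
```

Load it with sysctl -p /etc/sysctl.d/vbox-routing.conf (or simply reboot).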

Now add a default route in your guests pointing to the vboxnet0 interface:

route add default gw 192.168.56.1

And finally enable NAT on the host:

iptables -A POSTROUTING -t nat -s 192.168.56.0/24 -j MASQUERADE

You now have a working DHCP guest network even when you’re not connected to the LAN – and the guests additionally have internet access when you are.