Monday, 13 June 2016

Scottish Football Fans and LinkedIn(2012)

So Scotland is pretty used to not being represented at international football tournaments, so much so that the legend Paul Gascoigne once said, “What do you call a Scot at the World Cup? Referee!”. It's harsh, incredibly harsh. Yet as a fan of the Scottish game, with no team I can really support in good faith during the Euro 2016 tournament, I'm left with some time on my hands.

So as some of you know, LinkedIn suffered a compromise back in 2012, and it was discovered later on that their password security was a little lacking. Namely, they didn't salt their SHA-1 hashes. I won't go on about this point as it would be akin to flogging a dead horse. However, with a little bit of fu I was able to search the LinkedIn dump for particular hashes.
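For a sense of how such a search works, here's a minimal sketch, assuming the dump is one unsalted SHA-1 hash in hex per line. The file and candidate passwords below are stand-ins for illustration, not the real dump:

```shell
# Build a tiny stand-in "dump" purely for illustration: three unsalted
# SHA-1 hashes, two of which are the hash of "rangers".
dump=$(mktemp)
for p in rangers rangers celtic; do
  printf '%s' "$p" | sha1sum | awk '{print $1}'
done > "$dump"

# Hash a candidate password the same way and count how often it appears.
hash=$(printf '%s' 'rangers' | sha1sum | awk '{print $1}')
grep -c "$hash" "$dump"   # prints 2 for this toy dump
rm -f "$dump"
```

Scale that up over a wordlist of team names and their permutations and you get a league table.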

Now, missing out on the football whilst everyone else gets to play inspired me to look at how popular Scottish football teams are as passwords within the LinkedIn dump. So I decided to produce a league table of SCOTTISH TEAMS ONLY!!! BECAUSE WE CAN EXCLUDE YOU GUYS TOO ;)

This isn't definitive and it's only a bit of fun, but here goes:

1) Rangers Fans – 5669 passwords
2) Celtic Fans – 4509 passwords
3) Hamilton Fans – 4042 passwords
4) Hearts Fans – 2544 passwords
5) Aberdeen Fans – 2008 passwords
6) Dundee Fans – 1226 passwords
7) Albion Fans – 1062 passwords
8) Morton Fans – 956 passwords
9) Montrose Fans – 709 passwords
10) Hibs Fans – 484 passwords
11) Livingston Fans – 418 passwords
12) Motherwell Fans – 188 passwords
13) Falkirk Fans – 184 passwords
14) Kilmarnock Fans – 99 passwords
15) St Mirren Fans – 95 passwords
16) Partick Thistle Fans – 84 passwords
17) Dunfermline Fans – 77 passwords
18) Arbroath Fans – 73 passwords
19) Dumbarton Fans – 67 passwords
20) Forfar Fans – 50 passwords
21) Dundee United Fans – 47 passwords
22) Brechin Fans – 46 passwords
23) Stranraer Fans – 36 passwords
24) Peterhead Fans – 31 passwords
25) Raith Rovers Fans – 29 passwords

Now, granted, there will be a few false positives, and I didn't have time to search for EVERY Scottish football team, and every permutation of their name. However, whilst we can't fly the St Andrew's Cross during the Euros, I'd suggest this is a bloody good time for fellow Scottish football fans to go and change their passwords.

finux Xx

Wednesday, 24 February 2016

It's double-sided™

Save your time, this is going to be long, with a high chance of rantiness drizzled throughout. It seems once a year or so I end up blogging about something that's more of a reaction piece than anything of any real value. My guess is, this is no different. As many of you may have noticed, my engagement with the InfoSec-Community™ has been winding down for a while. I can't quite put my finger on what the issue is, but I know whatever it is makes me uneasy. Either I have changed (which I have, I hope for the better) or the InfoSec-Community™ has.

When I started getting involved in community events, it was Security-BSides London, and I'd dare to say that if I hadn't gone my life may have been very different. I made life-long friends at the conference, I learned a lot, and I realised from that moment that we're a better community when we can meet and exchange. Since then I've always held BSides close to my heart, and as I've grown and changed, my involvement in BSides events has too. I went from attendee/speaker to organiser (one of them) of the Rookie-Track, to organiser of three BSides conferences. I tell you what though, for large parts it's a thankless task. For me though, there is a moment when I look out at the event we created and see all the participants, and I know in my heart of hearts we did something worth doing. That in some small way we did make a difference.

I guess this is why today's news to Security-BSides organisers, that we have a Board of Directors (BoD) which decided it would be trademarking Security-BSides, was a blunt knock to the feels. The TL;DR is that a BSides event in Germany (or anywhere) will need to be rubber-stamped by one of a few people in America, and an American contract will need to be signed. This contract's jurisdiction (and recourse) will be in California. Apparently there was a discussion on a Google Group, and now, as an organiser of a German BSides, I need to ask and agree terms with a Board of Directors that I had no idea owned BSides. I kinda assumed it was all of us that owned it, which yes, makes me one of the dumbest freetards on the planet. “But finux, someone needs to protect the global brand”, I imagine some of you are saying, and you know what, I think you're wrong. Not just a little wrong, but a whole slice of pie wrong. BSides events are great because they're a representation of the communities that host them, because no two of them are the same. As a BSides organiser in Europe I can assure you that the sponsors we're getting come from our organisers' own networks of contacts and not from some global franchise owner. Our sponsors are interested in what we offer them, not what a Californian judge agrees to.

Those that follow the talks I've been giving at BSides events will know my view (irony is not just a friend of the wrinkly, it seems): building weapons to protect yourself from future perceived attacks is a slippery slope. I really wonder how a trademark infringement case, brought by a community against its own community, could bear anything other than a lose/lose situation. Can we imagine our new global BoD shutting down a community BSides event? Let that sink in. Filing an injunction in a Californian court, against any BSides event, anywhere, because one of a handful of people decided that “no, you can't be a BSides event”. Of course, they can opt not to do that, not to file an injunction, but having opted out once rather precludes you from ever doing it at all: “Oh, you didn't do that against BSidesFFM but you're doing it against us”. The problem with developing a structure to shut BSides events down is that you might have to shut a BSides event down. The reality of it is, all BSides now seem to have a centralised government. I can assure you all, I had no idea about any discussion about having this until I was told we had it. I spoke to two of the global BoD less than two months ago, and it didn't come up then.

Let's not forget that sponsors, organisers and the BoD are only a small part of the BSides ecosystem. Did anyone discuss this with the attendees/participants? I mean, if there is a global brand that needs to be protected, then surely they're the stakeholders that give that brand value. I've not asked any participants, but I wonder how our attendees feel about our event happening because a Security-BSides™ BoD in the United States says we're permitted. Then again, I'm pretty sure most don't care. Apathy is a wonderful thing sometimes.

I'm at a loss as to what this really means. Has BSides just become a brand that is to be shaped and governed by a few, and if so, why did we agree to that? What do we get for losing control over our own destiny as events? Will those protecting our events from those events' organisers actually add any benefit to those events? I worry that the next thing we'll need to do is start paying a stipend to be allowed to use the word BSides, you know, because there are costs. Will part of an event's sponsorship money be siphoned off to fund shutting down other BSides events in other parts of the world? Who is going to protect us from our BoDs, today, tomorrow, next week, next year, next decade? Everyone on that BoD is good people, and all of my ranting isn't a reflection on those wonderful human beings; I hope I can say the same of the next Directors and their successors.

I know I'm not helping at first glance, but we need to ask ourselves: are we just a brand? Are we losing our way? Do bigger BSides events now have influence over how events are managed, and who manages them? But the biggest question is, which one of you made things such that our global BoD feels they need to have control over events they don't organise?

Many BSides events in many different non-US colonies will need to discuss amongst themselves whether running with just the ideals of BSides, but without the Security-BSides™ endorsement, is an option. I know at BSidesHH we're going to be discussing whether we become HamburgSides, or stay the same. The real question is: do we fork off?

Arron 'finux' Finnon

Tuesday, 20 October 2015

OwnCloud in Docker.

So today version 8.2 of OwnCloud is released. In this blog post we'll look at how you can deploy OwnCloud in Docker with persistent storage.

There really isn't an alternative quite like OwnCloud. Think Dropbox, but on your infrastructure; think Google-like features without the data-mining.

So for this image we're going to use JChaney's build.

As usual, you're going to need Docker installed; just head over to the Docker website and you'll find an installation guide. In this particular example you're also going to need git installed too. We're going to clone jchaney's repository and build our images.

$ git clone 
$ cd owncloud
Then we're going to edit the Dockerfile and change the OWNCLOUD_VERSION from 8.1.3 to 8.2.0, then we just build the owncloud image. As usual, anything user-specific (like the image name below) is something you should alter to suit yourself. The naming convention for images is username/imagename.

$ docker build -t alba13/owncloud820 .

I'd suggest altering the Makefile and changing the storage location, which points to /tmp/owncloud (you'll lose your storage when you reboot if you leave it there), to a more useful location (maybe /srv/docker/owncloud/), and also altering “image_owncloud ?= jchaney/owncloud” to “image_owncloud ?= alba13/owncloud820”.

$ make owncloud-https

The above command links to the self-signed certs on your host system; however, you can edit the Makefile to point to specifically generated certs too. Generating some simple self-signed certs is pretty easy; just remember to edit the Makefile and change /etc/ssl/certs/ssl-cert-snakeoil.pem and /etc/ssl/private/ssl-cert-snakeoil.key to the names and locations of your certs.
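If you want a quick one-liner for the certs themselves, here's a minimal sketch; the output directory, filenames and CN are my own assumptions, so point the Makefile at wherever you actually put them:

```shell
# Generate a throwaway self-signed cert/key pair (no passphrase, 1 year).
certdir=./owncloud-certs   # assumed location, pick your own
mkdir -p "$certdir"
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -subj "/CN=owncloud.example.org" \
  -keyout "$certdir/owncloud.key" \
  -out "$certdir/owncloud.pem"

# Sanity-check what we just made.
openssl x509 -in "$certdir/owncloud.pem" -noout -subject
```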

Now the above command(s) will give you a simple OwnCloud deployment that uses SQLite3 for its DB. For most cases that may be all you're looking for, but JChaney's image also gives you the ability to set up a MariaDB with persistent storage, and it uses the --link option. Which basically means that the MariaDB is only accessible to Docker containers that you specifically link to it. Don't think of it as a security control though, just think of it as keeping it all in the Docker family.

You can set this up with the following commands;

$ make owncloud-production
# and to find the DB's details to configure OwnCloud run this;
$ make owncloud-mariadb-get-pw
Then just visit your host in a web browser and follow OwnCloud's install options.

Once you're set up and installed you can head over to the "Apps" section and add some more functionality to your OwnCloud install. There is an "Enable experimental apps" section, the name sort of suggests you should have a bit of caution.

So here you go, you're up and running with the latest OwnCloud, all wrapped up in a nice little Docker Container(s).


finux Xx

[note] In your Makefile you can edit DOCKER_RUN_OPTIONS ?= --restart=always --env "TZ=Europe/Berlin" and change your timezone. Also, as you can see above, I've added the --restart=always option. This just means that the containers will restart after a reboot or if they crash.

[note2] Also, just in case you have any sync issues, it might be worth adding "fastcgi_read_timeout 120;" to the nginx.conf file(s) alongside the other fastcgi parameters.
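For context, that directive sits alongside the other fastcgi settings; here's a sketch of what the relevant block might look like (the location pattern and socket path are assumptions, match them to your image's actual config):

```nginx
location ~ \.php$ {
    include fastcgi_params;
    fastcgi_pass unix:/var/run/php5-fpm.sock;   # assumed upstream socket
    fastcgi_read_timeout 120;                   # give large syncs more time
}
```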

Monday, 19 October 2015

Docker, and Openfire!

In the past couple of posts I've discussed how you can use Docker to deploy Gogs (which we then used as a password manager with Pass), then I blogged about how you can use it to roll out OpenVPN very easily, and I wrapped up with how we can use Docker to deploy Bind to block ad-servers. They were pretty short posts, and that in some ways speaks to how easy Docker makes deployments. I've said many times that Docker is like a cross between Git, apt-get, and VBox. I'm going to continue this run with a few other blogs this week about other day-to-day services that people may find useful.

So let's start this week off with XMPP, or more to the point, Openfire. With Openfire you can have your own private XMPP server for chat. Openfire is a pretty easy-to-use server which is even easier to deploy using Docker. Within a few commands you will be able to have your own chat server which can communicate with the rest of the world, but under your control.

As usual you'll need to make sure you have Docker installed. If you don't have it installed then please either visit the Docker website or use a search engine to find a guide for your OS.

We're going to use the great work of Sameersbn again, and I'd also suggest either using a Dynamic-DNS service (providers offer instructions on how to update your sub-domain to your IP address; you'll need it later when you configure Openfire) or buying yourself a domain name, they're pretty cheap after all.

So we're going to pull the openfire image down from the Docker Hub; however, feel free to build the image yourself with '$ docker build -t'.

So here we go

$ docker run --name openfire -d --restart=always \
  --publish 9090:9090 --publish 5222:5222 --publish 7777:7777 \
  --volume /srv/docker/openfire:/var/lib/openfire \
  sameersbn/openfire
And there you have it, you've just installed and deployed Openfire in a single command. Now all you need to do is configure it. Go to port 9090 on your host and follow the install instructions. You'll find a folder on your host system at /srv/docker/openfire with the configuration details if you ever need access to them. Remember, you'll need to set up port forwarding on your firewall/router to be able to communicate with the outside world. If you're looking for a client for your Android phone, may I suggest grabbing Conversations; it's pretty nice. Also, it's worth mentioning Openfire has plugins; you could use those to install Kraken, which will give you a Facebook and GTalk transport as well. The tl;dr of that is, you can have Facebook IMs and GTalk all rolled into a single account.

Hope some of you find this post interesting.

Finux Xx

Friday, 16 October 2015

Using Docker to block ads

So I'll continue the Docker run of blogs with another short guide on how we can use Docker to block ads. To be fair, this is yet again an example of using a container to do a relatively simple task and make it even easier. The idea here is that we're going to containerize a Domain Name Server (DNS); in addition we'll run a script that pulls down a list of ad-servers and blocks them. Then you just need to point the devices on your network at the DNS and boom, you've reclaimed some bandwidth and saved yourself being exposed to some rather crappy ads.

So yet again, make sure you have Docker installed.

We're going to use the excellent work of Sameersbn's bind Docker image, although I'm going to modify it slightly. As always, you can build it yourself with '$ docker build -t', \0/ yay!

We're also going to run a container that serves a pixel on port 80. This pixel will be served whenever a client asks our DNS for one of the ad domains in the blocklist.

The first time you run the DNS container you'll need to supply it with two environment variables: DN4C (the domain name you want for the container) and IP4C (the IP address of the host you're serving from). As usual, the capitalised placeholders are settings specific to you. It's pretty simple to be honest, basically;

$ docker run -d -e DN4C=YOURDOMAIN -e IP4C=YOURHOSTIP -p 53:53 -p 10000:10000 --name adbind -v /srv/docker/bind/:/data arr0n/docker-adbind

$ docker run -d -p 80:80 --name pixlserv arr0n/docker-pixlserv

There is an update script, called by the start-up script, which pulls down a list of known ad-servers; when a client requests one of those domains from the blocklist, it will be served a pixel locally. Both images and scripts are available on the Docker Hub, and if you wish to build them yourself you'll find the sources linked from there. Also, the DNS container has Webmin installed in case you need to do any administration on the server. You'll find a folder called 'data' in your working directory that stores all the configuration files. I'd also suggest running the containers with the --restart=always switch. Now all you need to do is point your client at the DNS container and ads will be blocked for you.
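To give you a feel for what the update script does, here's an illustrative stand-in (the file names and blocklist contents are made up for the example): it takes a hosts-style ad-server list and emits one bind zone stanza per domain, so bind answers for those domains itself and the client ends up at our pixel server instead of the ad network.

```shell
# A made-up, two-entry hosts-style blocklist for demonstration.
cat > blocklist.txt <<'EOF'
127.0.0.1 ads.example.com
127.0.0.1 tracker.example.net
EOF

# Turn each entry into a bind zone stanza pointing at a local zone file.
awk '/^127\.0\.0\.1/ {
  printf "zone \"%s\" { type master; file \"/etc/bind/blocked.zone\"; };\n", $2
}' blocklist.txt > blocked.conf

cat blocked.conf
```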


Finux Xx


As I said, the DNS image we're using is only slightly altered from Sameersbn's; I've taken this paragraph from his GitHub. I obviously suggest you set the ROOT_PASSWORD variable too.

"When the container is started the Webmin service is also started and is accessible from the web browser at http://localhost:10000. Login to Webmin with the username root and password password. Specify --env ROOT_PASSWORD=secretpassword on the docker run command to set a password of your choosing."

Thursday, 15 October 2015

From zero to OpenVPN in.......

So I've talked about this for a little while, and I've decided that I'll post another short little guide about it today. I think one of the things I like about Docker (yes, I said like, don't judge me!) is that you get an almost apt-like experience with some cool applications. A great example of this is deploying OpenVPN in next to no time at all.

So this is going to be short and sweet; I'm going to take it for granted that you have Docker installed on your box. If you don't, then hop on to a search engine (why not try Bing, I hear great things about it) and look for a guide about installing Docker on your platform.

We're going to use the excellent work of kylemanna. The commands below will automatically pull down the image, but as usual feel free to clone and 'docker build' the image yourself. Also, you're going to need a public-facing IP address or domain. If you're planning on doing this at home, may I suggest a dynamic DNS service if you don't already have something.

The capitalised placeholders indicate your input!

# let's get a data-only container spun up; this will also create a folder on your host at /srv/docker/openvpn
$ docker run --name openvpn-data -v /srv/docker/openvpn:/etc/openvpn busybox

# let's get the config files and certificates set up
$ docker run --volumes-from openvpn-data --rm kylemanna/openvpn ovpn_genconfig -u udp://VPN.SERVERNAME.COM
$ docker run --volumes-from openvpn-data --rm -it kylemanna/openvpn ovpn_initpki
# you'll be asked to set some passwords for your OpenVPN's certs.  Whatever you like is cool with me. 
# Let's get the OpenVPN up and running 
$ docker run --volumes-from openvpn-data -d -p 1194:1194/udp --cap-add=NET_ADMIN kylemanna/openvpn

So you've just deployed OpenVPN in a container, with persistent storage, in four commands. I know, right? It's kinda cool to suddenly be able to have OpenVPN on any box you can run Docker on, without being a card-carrying member of the sandal brigade. However, we're not finished just yet. Let's generate some certificates for our end-users (this is probably you). Remember that password stuff we did? You'll need the CA one.

# Generate some client config files, remember to change CLIENTNAME to the Name of your Client ;) 
$ docker run --volumes-from openvpn-data --rm -it kylemanna/openvpn easyrsa build-client-full CLIENTNAME nopass

# and lets retrieve the files
$ docker run --volumes-from openvpn-data --rm kylemanna/openvpn ovpn_getclient CLIENTNAME > CLIENTNAME.ovpn

You'll find a .ovpn file in your working directory which should work with most OpenVPN client implementations; however, inside that openvpn folder you'll find your client certificate files if you need them. I'd suggest you do this for every device you want connected to your OpenVPN container. What I mean is, instead of one CLIENTNAME you have PHONE and LAPTOP and OTHERLAPTOP, and so on and so forth. Trust me, in the end it makes life easier for you.

That's it, you're up and running with OpenVPN. If you want to autostart your OpenVPN container, so that when your box reboots it starts again, look into the '--restart=always' switch ($ docker run --volumes-from openvpn-data -d -p 1194:1194/udp --cap-add=NET_ADMIN --restart=always kylemanna/openvpn).

Now for the very cool trick with this: Digital-Ocean. You can basically have OpenVPN in the 'cloud' for 7 cents a day. You can then destroy it once you're done, or keep it as an OpenVPN deployment you use when you're out and about. That's your choice. Do me a solid though: if you've not signed up for Digital-Ocean and want to try it, sign up with my referral link and I'll get some credits on my DO account.

Also, go read the GitHub page from Kylemanna. It's full of useful information, and it's an example of how people who maintain Docker images should document them.

Finux Xx 

Wednesday, 14 October 2015

Using docker as a password manager!

Alert: this is click-bait, peeps. I'm not really using Docker as a password manager; I'm using a password manager run inside of a Docker container.

In fact, even that's not exactly true: I'm using Docker to run a Git server, and I'm using Pass to PGP-encrypt password files. I thought I would write a quick little howto guide in case anyone was interested.

With this particular solution you get a password sync solution that can be used easily on Linux and Android. I have no idea how well it runs on Apple or Windows, but I'm guessing you could use Docker to fix that for you too.

So why am I using Docker? Because the revolution will be containerized, people! Or more importantly, because I can. Secondly, I've found running a Git installation within Docker to be really, really easy. So, I've played with GitLab, and yes, it's very nice and shiny. However, I really like Gogs (Go Git Service): it's lightweight and it does everything we need. I guess what I like most about Gogs is how lightweight it is; however, experience has taught me that when something is lightweight it is also buggy. I'm glad to report Gogs doesn't break the axiom, which makes it ideal for running in a container. No seriously, ideal, because it's not going to hose everything, precisely because it's in a container.

So let's start with the obvious: you must have Docker installed. I hate the fact we're in a world where I have to say this, but then again we have to tell people not to use hair-dryers in showers, so I guess I can't grumble. I'm not going to tell you how to install Docker, because there are nearly 6 million hits on Google, and if you can't work that out, this guide is way beyond your pay-grade.

Once you have a working Docker installation we can pull Gogs down (you can build it via the Dockerfile too)

# Pull image from Docker Hub.
$ docker pull gogs/gogs

# Create local directory for volume.
$ sudo mkdir -p /srv/docker/gogs/

# Use `docker run` for the first time.
$ docker run --name=gogs -p 10022:22 -p 10080:3000 -v /srv/docker/gogs:/data gogs/gogs

# Use `docker start` if you have stopped it.
$ docker start gogs

zomg, zomg, zomg, you've just set up a Git server with persistent storage and it was fewer than 4 commands. Now here we go: open your web browser, visit your host on port 10080, and follow the install instructions.

Feel free to plug this into your MySQL deployment if you want (as far as I know, this image doesn't have MySQL installed, so it won't be persistent), but I'd suggest you just use SQLite3 for now (yay, SQLite3 will be persistent as it's stored in the /data folder).

I'd also suggest changing “Domain*” to the IP of the box you're running this container on, and “Application URL*” to that address with port 10080 (this will be handy later on). Also, for the love of god, remember to change the SSH port to 10022.

Why not set up an admin account now, because nothing sucks more than not being an administrator from the get go.

Once that's complete you may get an error from your web browser as it points to localhost:10080 instead of your box's IP (if this has happened, it's because you didn't read the instruction above properly).

Now that you're in Gogs (and logged in as your user), select “New Repository” and fill in the details. You can name this repository anything you want; I don't care, it's a blog post, not a lifestyle choice.

Click “Create Repository”

Now click on the “Dashboard” tab again, and then the “Account Settings” button. Once there, select SSH keys and add your SSH key (if you don't know how to generate an SSH key I have no idea how you got this far, but I'm going to do you a solid and suggest searching for a guide on generating SSH keys).

Now for the fun part: you need some PGP keys. Now, you could use the ones you already have, and I'm not going to judge you for that, but hey, why not generate specific keys for a specific job? If you're going to ignore that piece of advice then that's cool, just ignore the next few steps (just don't forget to install pass). In Ubuntu (or whatever Linux you're running) do the following;

$ sudo apt-get install -y pass gnupg # only on Ubuntu/Debian

Now you can run the code below in any Linux

$ gpg --gen-key 

The default option “1” is fine, but make sure the keysize is 4096 in the next option, and the final default option is fine too. Fill in the other options; however, when you get to the passphrase option, choose something strong!!!! I mean, this is going to be your master password for your password manager, so let's not choose password123 or some other equally dumb password.
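If you'd rather script it, GnuPG can read the same answers from a parameter file via `gpg --batch --gen-key keyparams`; here's a sketch (the name, email and passphrase are obviously placeholders):

```text
%echo Generating the password-store key
Key-Type: RSA
Key-Length: 4096
Name-Real: Your Name
Name-Email: you@example.org
Expire-Date: 0
Passphrase: something-actually-strong
%commit
```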

You'll need to move the mouse around a bit and maybe type whilst gpg is getting some entropy, this bit always sucks for me, it might suck for you too. Practice some patience, you'll get there.

Once that's done, you need to grab the key-id like this;

$ gpg --list-key
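The ID pass wants is the hex short ID after the slash on the “pub” line. Here's a sketch that pulls it out of a canned example of that output (your real listing will differ):

```shell
# Canned sample of `gpg --list-key` output, for illustration only.
sample='pub   4096R/D64AA6BE 2015-10-14
uid                  Your Name <you@example.org>
sub   4096R/1A2B3C4D 2015-10-14'

# Grab the short key ID from the "pub" line.
keyid=$(printf '%s\n' "$sample" | awk '/^pub/ {split($2, a, "/"); print a[2]}')
echo "$keyid"   # prints D64AA6BE
```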

Then you need to initialise pass with the key ID, so it knows which key to encrypt your passwords with

$ pass init D64AA6BE
$ cd ~/.password-store/
$ git init
$ git config --global user.email "your email address here"
$ git config --global user.name "your username here"
$ touch README.md
$ git add README.md
$ git commit -m "first commit"
$ git remote add origin ssh://git@YOURSERVERIP:10022/YOURUSER/YOURREPO.git
$ git push -u origin master
The placeholder values are specific to you! Now let's generate a password with pass and push it to the gogs container

$ pass generate blogpost/test 24
$ pass git push -u origin master

You can view your password with the following command

$ pass show blogpost/test

I'd also suggest this;

$ man pass 

and then read the documentation.  If you're going to use a password manager it makes sense to read the documentation.

You'll be asked to enter your GPG password and boom, you'll have your password in front of you for copying and pasting. Superb: now a decentralised password manager. Which is cool; you can now sync your passwords across all your Linux boxes. But that's not all. You can also use the Android app too (there is an iOS app but I know nothing about it). There are a few GUIs as well; you'll find more info on the pass website.

Side-note: you'll need to put your SSH key and your GPG keys on your phone to sync/pull/push; don't be stupid and Gmail them to yourself. Get a USB cable and do it that way.

The documentation is pretty good, but it'll take you a little while to get used to it. I like this solution: it's easy to run and deploy, and I don't have to trust anyone with my passwords.

This was supposed to be a fun little guide to get you up and running, nothing definitive. If this isn't the thing for you, then that's okay too. Have fun!


[UPDATE] if you get this error message;

"Binary and template file version does not match, did you forget to recompile?"

Run this command

$ sudo rm -rf /srv/docker/gogs/templates