
Affordable* Self-Updating WordPress Installs in 5 Minutes

* Cost is as-much-as-you-want with a minimum of €1. Please read my last paragraph about pricing.

I just counted: As it turns out, I am the proud maintainer of 19 different WordPress installations. How did I get here? I have no idea. It probably has to do with some bad life choices. “Let’s just spin up a WordPress for this!” sounded totally doable at the time. But then what?

“Maintaining WordPress installations is educational, fun, and quite easy.”
— said no one ever.

WordPress is brittle. WordPress plugins and themes are often created by folks who are just learning how to code. What you want to do is update WordPress and all plugins and themes *all* *the* *time*. Otherwise, very bad things happen sooner than you may think. Trust me, been there, done that.

The wish list

After a few WordPress installs I had hosted for a friend who promised to “make all updates all the time, of course” got hacked, I started looking for a way to solve this problem once and (hopefully) for all. This is the wishlist I started out with:

  • It should be quick and easy to “spin up a new WordPress”.
  • WordPress installs (including plugins, themes) should update automatically.
  • Sites should be backed up automatically.
  • All plugins and themes should be under version control.
  • Sites should use SSL and I don’t want to be bothered with certificate renewals.
  • It should be affordable – most of the WordPress sites I run are just-for-fun projects that don’t make any money.
  • Nice to have: Easily install plugins on all WordPress installs at once.

The solution involves Uberspace, Ansible, Let’s Encrypt, Composer, and Bedrock — a most awesome combination as it turns out.

Here goes:

1. Get an Uberspace (23 seconds)

This is easy: head to uberspace.de and sign up for an Uberspace.

Uberspace is an awesome command-line-powered hosting provider with great features and support that lets you pay what you want (with a minimum of €1 per month).

2. Configure your DNS (49 seconds)

Make note of your IP address(es) within the Uberspace dashboard and edit your DNS records so that your domains point to your Uberspace. Since Uberspace is fully IPv6 ready, you might want to set up both A and AAAA records.
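For instance, with the documentation addresses 192.0.2.42 and 2001:db8::42 standing in for your actual Uberspace IPs, the zone entries could look like this:

yourdomain.com.    IN  A     192.0.2.42
yourdomain.com.    IN  AAAA  2001:db8::42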

3. Create your Bedrock fork (14 seconds)

Go to my fork of the Bedrock repo on GitHub and fork it yourself.

Bedrock is a Composer-powered WordPress boilerplate. The idea is that you keep your plugins and themes in a Git repo and reference publicly available third party themes and plugins using Composer, a dependency manager for PHP. For every WordPress site, you’ll create a fork of Bedrock. Read this handy guide to learn how to add themes and plugins to your Bedrock WordPress.
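In practice, adding a public plugin is a single Composer command, since Bedrock pulls plugins and themes from WordPress Packagist. As a quick sketch (Akismet is just an example plugin):

composer require wpackagist-plugin/akismet

Commit the updated composer.json and composer.lock to your fork, push, and run the playbook again (see step 7).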

4. Install Ansible (42 seconds)

On the Mac with Homebrew, a simple brew install ansible will do. For more ways to install Ansible, see the docs.

Ansible is an IT automation engine (their words, not mine) which is basically scripted multi-server SSH on steroids. In Ansible, you create “playbooks” which are scripts that run stuff on servers.

5. Clone the Uberspace Playbook (13 seconds)

Simply run git clone git@github.com:yeah/ansible-uberspace.git to get the latest and greatest version of my Uberspace Playbook.

This is where the magic happens. The Uberspace Playbook contains all the configurations necessary to set up awesome WordPress hosting on Uberspace.

6. Add your Uberspace as Ansible inventory (2 minutes)

Within your copy of the Uberspace Playbook, copy uberspaces.example to uberspaces and add your Uberspace host and username. You can add as many Uberspaces here as you want; Ansible will set them all up together.

Next, copy host_vars/UBERSPACE_NAME.UBERSPACE_HOST.uberspace.de.example to a new file without the .example suffix and replace UBERSPACE_NAME with your username and UBERSPACE_HOST with your Uberspace host.

Now, edit the file you just created and add the domains you set up previously. Choose an internal name for your WordPress install, modify bedrock_repo to point to your Bedrock fork, and specify the domains (again). If you want to use your Uberspace for other things besides WordPress, add all domains you’re using to the domains section at the beginning, and only those which should point to a WordPress instance to the respective domains within wordpress_instances. You can of course have as many domains, subdomains, and WordPress instances as you want.
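For illustration, the host_vars file might end up looking roughly like this – only a sketch, since the exact keys and layout follow the .example file in the repo, and every name and domain below is a placeholder:

domains:
  - example.com
  - blog.example.com

wordpress_instances:
  - name: myblog
    bedrock_repo: git@github.com:you/bedrock.git
    domains:
      - blog.example.com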

7. Run Ansible! (39 seconds)

Within your copy of the Uberspace Playbook, run ansible-playbook --ask-pass site.yml.

That’s it. You’re done. Enjoy your fresh new auto-updating, SSL-encrypted, and backed-up WordPress by navigating to https://yourdomain.com/wp/wp-admin.

Be sure to check out the Uberspace Playbook’s source code to learn what actually happens in the background.

If you need to add themes or plugins, simply update, commit and push your Bedrock fork and run the playbook again.

8. Hold on a minute and think about pricing

While the kind folks at Uberspace do allow you to pay as little as €1 for an account, a generous offer like that is not sustainable. Uberspace recommends that you pay between €5 and €10 per month which is still a great deal given the flexibility and support you’re getting.

If you’re making money from the sites you’re hosting, consider paying more than that and thereby help keep Uberspace a place that’s affordable for everyone. If you can’t afford more than €1, that’s fine, too.

Just be sure to play fair and pay what you think is right.

Send email attachments to ownCloud

Three years have passed since my article about sending email attachments to Dropbox. A lot has happened since then. For instance, since the end of Safe Harbor, we don’t trust U.S.-based cloud providers as much as we (maybe) used to.

So, here’s an update on how to automatically save mail attachments (e.g. invoices and receipts) in a specific folder in ownCloud. The basic idea is to set up a billing@example.com email address and store all received PDF files in ownCloud.

So, here we go. We’re still using Uberspace for this because we still like it – even after three years. But again, you should be able to adapt this for any Linux-based setup.

Configuring qmail and reformime

At Uberspace, you get an unlimited number of email addresses out of the box. Your primary address is composed like this:

username@hostname.uberspace.de

Where username is your Uberspace username and hostname is the host your account is hosted on. What happens to emails coming this way is governed by a small file named ~/.qmail. In much the same way, you can use any email address that follows this format:

username-foo@hostname.uberspace.de

Where foo can be anything you like. To specify what should happen with emails coming in via this address, you can create a file called ~/.qmail-foo.
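Such a file contains one delivery instruction per line: a maildir path, a forward address prefixed with &, or a program prefixed with |. As a small sketch (the forward address is a placeholder):

# ~/.qmail-foo: keep a local copy, then forward
./Maildir/
&someone@example.com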

So, for instance, if you want all email PDF attachments sent to peter-owncloud@phoenix.uberspace.de to appear in ownCloud, create a ~/.qmail-owncloud file with the following content:

| /usr/bin/reformime -X /bin/sh -c "if [ "\${FILENAME#*.}" == "pdf" ]; then curl -X PUT -u username:password \"https://owncloud.example.com/remote.php/webdav/email-inbox/$(date +%Y%m%d%H%M%S)_\$FILENAME\" --data-binary @- ;fi"

Yep. That’s one single line. It uses reformime to extract all file attachments and then uploads those that end in .pdf to the email-inbox/ folder in your ownCloud.

Of course, it’s a good idea to create a separate user in your ownCloud which only has (create-only) access to your email-inbox/ folder and use its credentials for the curl above.

Other than that, you’re all set. Now redirect your billing@example.com address to peter-owncloud@phoenix.uberspace.de and use that for services that send you invoices!

Jumping to JavaScript code definitions on click in vim with tern.js

If you develop JavaScript a lot and do not know tern.js, you should definitely check it out: Tern.js analyzes your code and enhances your editor with code completion, function argument hints, refactoring, and more goodies.

If you use vim and want to use Tern’s ability to jump to a variable or function definition by simply Ctrl-clicking on it, just create a file called ~/.vim/ftplugin/javascript/ternclick.vim and add this line to it:

:nnoremap <buffer> <C-LeftMouse> <LeftMouse>:TernDef<CR>

By limiting this functionality to buffers with filetype javascript you can still use other ctrl-click plug-ins (or built-ins) like ctags.

VPN (IPsec) tunnel between a pfSense 2.0 router and a FRITZ!Box

We have a pfSense 2.0 router at our coworking space which is hooked up to a pretty fast VDSL line, so I thought it would be a fun idea to connect my home network (where I’m using a FRITZ!Box 7390) to the work LAN using a secure and permanent VPN tunnel.

A quick Google search only yields guides for the outdated 1.2 version of pfSense that don’t use DynDNS hostnames for both ends, so I did a quick writeup of my own.

Prerequisites

First things first, create permanent hostnames for your pfSense and your FRITZ!Box. If your DSL provider has assigned permanent IP addresses, that’s fine. If they didn’t, you’ll probably need something like DynDNS. Last time I checked, you could still get free accounts; otherwise it’s just a few bucks a year – probably a good investment. You’ll need to configure both the pfSense and the FRITZ!Box to update your DynDNS hosts whenever their IP address changes, but that’s pretty straightforward so I won’t cover it here. Fun fact: you can add CNAME records to your company domain pointing to your DynDNS host, so it looks even more professional. We use vpn.launchco.com for instance – how cool is that?
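With the placeholder hostnames used below, that CNAME record is a one-liner in your company domain’s zone file:

vpn.launchco.com.  IN  CNAME  your-pfsense.dyndns.org.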

You’ll also need two different primary subnets for your networks. That is, if your home network lives in 192.168.178.0/24, which is the standard network a FRITZ!Box uses, your work network has to use something else, like 192.168.1.0/24, which happens to be the pfSense standard. So if, like me, you’re a big fan of sticking with sensible vendor defaults, you should be safe.

Now, with the permanent hostnames and subnets in place, let’s get down to business.

Setting up pfSense

We’re using IPsec, so let’s head to VPN -> IPsec first and click the [+] icon on the bottom right to add a new phase 1 entry.

Fill in the form according to what you see in the following screenshot:

Screenshot of pfSense configuration phase 1 entry

Obviously, replace your-fritz.dyndns.org with the permanent hostname assigned to your FRITZ!Box, and your-pfsense.dyndns.org with the one for your pfSense box. The Pre-Shared Key should be a long random string. Don’t worry, you won’t have to remember it. You’ll just save it in the FRITZ!Box later and then you can forget about it.

Next up, we need a phase 2 entry. For that, click the [+] icon next to a label that says Show 0 Phase-2 entries and fill the form like below:

Screenshot of pfSense configuration phase 2 entry

Here, you just need to make sure that you replace 192.168.178.0 with the actual subnet your FRITZ!Box uses. Again, if you’ve stuck with the default when setting up the box, this setting should be right for you.

That should be it for the pfSense. After saving, it’ll probably ask you to apply or reload the configuration. This is safe to do now.

Setting up the FRITZ!Box

Now, let’s finish this by configuring a VPN entry in your FRITZ!Box. From my perspective, this part is much easier, because I’m just pasting code instead of fiddling with screenshots – yay!

Fire up your favorite text editor and paste the following code:
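What follows is a minimal sketch in AVM’s vpncfg format – treat every value as a placeholder: the hostnames and key must match the pfSense phase 1 entry from above, the two ipnet blocks must match your actual subnets, and depending on your firmware the import may require additional fields, so cross-check with a known-good example for your FRITZ!OS version:

vpncfg {
        connections {
                enabled = yes;
                conn_type = conntype_lan;
                name = "Work VPN";
                always_renew = yes;
                reject_not_encrypted = no;
                dont_filter_master_brain = no;
                keepalive_ip = 0.0.0.0;
                remoteip = 0.0.0.0;
                remotehostname = "your-pfsense.dyndns.org";
                localid {
                        fqdn = "your-fritz.dyndns.org";
                }
                remoteid {
                        fqdn = "your-pfsense.dyndns.org";
                }
                mode = phase1_mode_aggressive;
                phase1ss = "all/all/all";
                keytype = connkeytype_pre_shared;
                key = "YOUR_LONG_RANDOM_PRE_SHARED_KEY";
                cert_do_server_auth = no;
                use_nat_t = yes;
                use_xauth = no;
                use_cfgmode = no;
                phase2localid {
                        ipnet {
                                ipaddr = 192.168.178.0;
                                mask = 255.255.255.0;
                        }
                }
                phase2remoteid {
                        ipnet {
                                ipaddr = 192.168.1.0;
                                mask = 255.255.255.0;
                        }
                }
                phase2ss = "esp-all-all/ah-none/comp-all/pfs";
                accesslist = "permit ip any 192.168.1.0 255.255.255.0";
        }
}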

Make the necessary modifications. Then, open the FRITZ!Box configuration interface in your browser and head to Internet -> Freigaben -> VPN, use the browse button to select the file you just created and click on VPN-Einstellungen importieren.

That’s it – you’re done. In my first trials I had to go back to the pfSense interface and navigate to Status -> IPsec to click on a small [>] (“play”) button to get things rolling. Maybe you need this, maybe it just works without it.

Getting the connection up after a restart of either of the two routers sometimes fails, most probably because the DynDNS updates have not yet propagated when the VPN tries to connect. In this case, just be patient; both boxes will keep retrying to open VPN connections and you can always stop/start on both ends yourself. Once a connection is made, the tunnels are usually stable and rock-solid. Enjoy!

How to build an 8 TB RAID5 encrypted time capsule for 500 Euros

So I wanted to buy a NAS that can act as a time capsule for Apple computers and run a proper Linux at the same time. I also wanted to be able to run the occasional Windows or Linux VM and I wanted to have a lot of storage. As I knew the thing was going to be in our coworking space, it also needed to have disk encryption.

Here’s how I built this for just under €500.00 using standard components and free open source software.

Selecting the hardware components

I found the HP ProLiant MicroServer (see review and more pictures) to deliver great value for the price. At the time of writing, you can buy it for €209.90 if you’re in Germany like me.

The N36L (which I bought) comes with a single 250GB hard drive which obviously did not meet my “a lot of storage” requirement. So I bought 4 identical Seagate Barracuda Green 2000GB SATA drives which would add another €229.92 to the bill if you bought them today. I am not an expert in hard drives, but the Seagate Barracuda brand was familiar and “Green” sounds good as well.

If you don’t want your new server to host virtual machines at some point, you can probably get out your credit card and check out right now. If you’re like me though, you’d add another two sticks of 4GB Kingston ValueRAM PC3-10667U CL9 (DDR3-1333) to your cart. The two of them together are just €44.24, so it’s no big deal anyway.

All components together will set you back €484.06. The rest is based on open source software (Debian mostly) which is free as in beer. More about that after the break.
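As a teaser, the storage side on Debian boils down to mdadm for the RAID5 plus LUKS for the encryption – a rough sketch, assuming the four drives show up as /dev/sdb through /dev/sde:

# create a RAID5 array from the four 2TB drives
mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
# encrypt the array and open it as /dev/mapper/storage
cryptsetup luksFormat /dev/md0
cryptsetup luksOpen /dev/md0 storage
# put a filesystem on the encrypted device and mount it
mkfs.ext4 /dev/mapper/storage
mount /dev/mapper/storage /mnt/storage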

Continue reading →

iPad Safari Bug: Touching iFrames

I didn’t want to buy an iPad. But in a recent project for one of our clients we had to optimize a page for the iPad. And that is why I just bought one to fix some bugs we had with that page. After some minutes of searching, I came across a nasty browser bug.

The page we developed contained a slider and an image that you can drag around. Both were implemented using the typical touch events that you know from mobile Safari: touchstart, touchmove, touchend, touchcancel. The page worked on my iPhone and on my iPad as well. But our client put the page into an iFrame in order to include it in his website. And that didn’t work.

After some investigation I discovered that the touch events worked in some situations, but not everywhere on the DOM element. It turned out that the position of the iFrame has an impact on the area where the touch events work. Let’s say the iFrame is located 100 pixels below the document’s top; then the touch events work anywhere on the image except for the lower 100 pixels. From what I know about browsers, my guess is that the ‘cursor’ position is not calculated correctly when passing a touch event into an iFrame.

To get a better understanding, please try this little bug demo on your iPad or iPhone. The sources of the iFrame content look like this:

<html>
  <body>
    <div ontouchstart="alert('touched');"
        style="position:absolute;top:0px;width:200px;height:200px;background:yellow;"></div>
    <div ontouchstart="alert('won't be touched');"
        style="position:absolute;top:200px;width:200px;height:200px;background:red;"></div>
    <div ontouchstart="alert('touched only in upper half');"
        style="position:absolute;top:0px;left:200px;width:200px;height:400px;background:orange;"></div>
  </body>
</html>

And the source of the page embedding the iFrame:

<html>
  <body>
    <iframe style="position:absolute;top:200px;"
        border="0" src="iframecontent.html" width="400" frameborder="0"
        height="400" scrolling="auto"></iframe>
  </body>
</html>

I reported the bug to Apple, but since their bug tracking is not very motivating, I might add the bug to WebKit as well. (I didn’t even get an email confirming that the bug was reported.)

JavaScript Testing and Continuous Integration Part II

In my previous post I described how to use the YUI Test Framework to write JavaScript tests for your web application. This part continues with how to run the tests in different browsers and how this can be used with a continuous integration server.

Executing your tests in different browsers

The best-known framework when it comes to browser testing is Selenium. It automates the execution of click streams in different browsers using two major components: the Selenium IDE and the Selenium Remote Control. The IDE is a Firefox plugin for recording the click streams. Originally, the click streams were written as plain HTML tables, but over time Selenium added drivers for the most popular programming languages. This way, you may record a click stream and have it transformed into, e.g., Java/JUnit, where you can then add assertions or high-level programming constructs. A simple Selenium JUnit test may then look like this:

public class MyTest extends SeleneseTestCase {
	protected Selenium selenium;

	public void setUp() {
		selenium = new DefaultSelenium("192.168.1.111", 4444,
                                       "*firefox", "http://your.web.server/");
		selenium.start();
	}

	public void testSomething() {
		selenium.open("http://you.web.server/path/to/your/tests");
		selenium.windowMaximize();
		selenium.windowFocus();
		// wait for...
		selenium.isVisible("unittestsFinished");
	}
}

With the test in place, the Selenium Remote Control comes into play. It is used to open a browser, inject some JavaScript code that triggers all the desired events, and click through your app. In our case we use Selenium to open Firefox, load the HTML page containing our YUI tests, and wait for them to finish. (The wait is not shown here; you have to write it yourself.) When our tests finish, we write an invisible <div> containing the string ‘unittestsFinished’ which signals to Selenium that the browser may be closed.
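One possible way to implement that wait is Selenium RC’s waitForCondition – a sketch, assuming the marker <div> carries the id unittestsFinished:

// poll the application window until the marker element appears (timeout in ms)
selenium.waitForCondition(
    "selenium.browserbot.getCurrentWindow().document"
        + ".getElementById('unittestsFinished') != null",
    "300000");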

Iterate over your browsers

Now that we have Selenium to trigger our JavaScript tests, we still need to iterate over all browsers. We also need to trigger Selenium somehow. Since the continuous integration server at pidoco is a Hudson server, and Hudson can execute an Ant build script, we use Ant to do the job:

<property name="browsers" value="firefox,safari,iexplore"/>
<!-- the <for> task comes from the ant-contrib library -->
<for list="${browsers}" param="browser" delimiter=",">
	<sequential>
		<junit haltonfailure="false" dir="antbuild" fork="yes"
                               showoutput="yes">
			<sysproperty key="browser" value="@{browser}"/>
			<formatter type="xml" />
			<test name="tests.selenium.SeleniumYUITestsLauncher"
                              todir="tests"
                              outfile="@{browser}-selenium-result.xml"/>
		</junit>
		<get src="http://your.test.server/path/to/junitxml"
                     dest="tests/TEST-@{browser}-yui-result.xml"/>
		<replace file="tests/TEST-@{browser}-yui-result.xml"
                         token="testsuite name=&quot;"
                         value="testsuite name=&quot;@{browser}."/>
	</sequential>
</for>

Ant iterates over the list of browsers we want the tests to be executed in. It then triggers the Selenium test via JUnit. The test result in *selenium-result.xml is not very interesting, since it is only the one test which opens the YUI tests. Using a system property, Ant tells Selenium which browser it should open.

Next, we download the reported YUI test result from our test server to the integration server. You could also use Selenium to read the result from the HTML page directly and save it to a file within the JUnit test. However, we had a test report server from the beginning, so we used that to get the results from the browser to our integration server. Once we have the XML file, we use a regular expression to add a fake package to the test suite so that the integration server can tell us in which browser a test is failing.

Continuous Integration

The only thing that is left for the continuous integration server is to call the Ant target and read all the XML files in the ‘tests’ folder as JUnit test results. This is pretty easy, but it allows you to analyse your JavaScript tests, which were executed automatically in different browsers, with all the power of the continuous integration server of your choice.

JavaScript Testing and Continuous Integration Part I

Putting your tests into a continuous integration system for test automation is quite common in professional software development. However, when it comes to testing JavaScript code, many people lack the experience and best practices to set up a productive infrastructure. We at pidoco have invested quite some time in solving this problem and came up with the setup I describe in this blog post.

The JavaScript Testing Framework

There are countless frameworks out there for testing JavaScript code, and I have to admit it is quite confusing to find the best one for your project. From what I have seen so far, many frameworks look very similar to each other. They do differ in some details, which make people like one or the other. To us, it was important to have comprehensive documentation of the framework’s capabilities. Therefore, we chose the Yahoo UI Test framework. It comes with extensive documentation, just as you know it from the Yahoo UI framework. You can transform a JavaScript object into a test case, which makes it look quite similar to JUnit tests:

var testCase = new Y.Test.Case({

    name: "TestCase Name",

    //traditional test names
    testSomething : function () {
        //...
    },

    testSomethingElse : function () {
        //...
    }
});

Within your test functions you have a broad variety of assertions that you can use. In addition, you can use the wait() and resume() functions to interrupt a test and wait for asynchronous events to happen. This is very useful if you need to wait for some UI components to be rendered or some server calls to return. You can also trigger DOM events using YUI’s event simulation:

Y.one("body").simulate("click");

Once you have written your first test case, you want to execute the test and review the results. YUI Test comes with a nice test result viewer that you may include in your test page. On the other hand, the tests should be executed in several browsers, and you may want to review the results on just one page. For this, the framework offers a function to send the results to a URL where a server may collect and show them. In our first attempt, we sent the result as a JSON string to the server, which transformed the collected results into a simple table with the tests in rows and the browsers in columns.

var reporter = new Y.Test.Reporter("http://www.yourserver.com/path/to/target",
                                   Y.Test.Format.JSON);
reporter.report(results);

Since many continuous integration servers like to read test results from an XML file, YUI offers several formats for your results:

  • Y.Test.Format.XML (default)
  • Y.Test.Format.JSON
  • Y.Test.Format.JUnitXML
  • Y.Test.Format.TAP

Oh, and if you asked yourself “where the heck is this Y coming from?”, here is the solution:

var Y = YUI();
Y.use('test');

Now that we are able to write JavaScript unit and integration tests, the next question comes up:

How to test your web app

When following test-driven development, a lot of JavaScript has to be written just for the tests. When deploying the web app into the production environment, we don’t want to include the testing code. But how do you separate testing code from production code? We found two possible solutions. First, we use a plugin that loads, triggers, and reports the tests, but is only included in the system during development and testing. This way, we simply disable the plugin and have a production system without any testing code. However, not every web app has a plugin system that supports this solution. Therefore, we came up with a second way to test your app: deploy a second app that contains a simple HTML page which loads the testing code plus an iFrame containing the web app to be tested.

Please keep in mind the same-origin policy for the HTML page containing the iFrame and the content of the iFrame: if both are delivered from the same domain, the outer page may access the JavaScript code of the iFrame content. This enables us to call the production code from the testing code. One gotcha: if your code (either the application’s or the test’s) checks for, e.g., instances of Array, the check fails if the array was created inside the iFrame and checked outside, or vice versa. You can create an array using new window.frames[0].Array() and test for an array with arr instanceof window.frames[0].Array.
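A tiny sketch of that pitfall (someArray stands for any array the app created inside the iFrame):

var appWindow = window.frames[0];
var arr = appWindow.someArray;  // created with new Array() inside the iFrame
arr instanceof Array;           // false – the outer page has its own Array constructor
arr instanceof appWindow.Array; // true – check against the iFrame's Array instead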

Using the iFrame approach, you may enhance the HTML page so that it loads your app and the test code, automatically runs the tests, and listens to the Y.Test.Runner.COMPLETE_EVENT to upload the test result to your reporting server.
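Put together, the glue code on that test page could look roughly like this (the reporting URL is a placeholder):

YUI().use('test', function (Y) {
    Y.Test.Runner.add(testCase); // the test case object from above
    // upload the results once all tests have finished
    Y.Test.Runner.subscribe(Y.Test.Runner.COMPLETE_EVENT, function (data) {
        var reporter = new Y.Test.Reporter("http://your.test.server/path/to/junitxml",
                                           Y.Test.Format.JUnitXML);
        reporter.report(data.results);
    });
    Y.Test.Runner.run();
});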

How the tests are triggered in different browsers and how this goes together with continuous integration is the subject of the second part of this blog post.

SEO-friendly Affiliate Cookies powered by mod_rewrite

So you want to run an affiliate or partner program, like for example the Planio Partner Program. Good idea. Happy customers who recommend your service to their friends are the best marketing you can get. Why not reward your customers and make them even happier?

From a technical point of view, an affiliate program is nothing fancy at first glance:

  1. give your customer a link with a unique token
  2. once a visitor signs up, check if a token is present and look up the respective customer
  3. reward them!

However! There are some technical pitfalls.

Keep track of the token

This is a rather easy one: of course, you have to remember the token throughout entire visits. You can’t expect a visitor to turn into a paying customer right on the first page. They will check out your site, visit a couple of pages, and maybe even come back another day to buy your product. You still want to reward your affiliate, so cookies will be your only option.

Don’t mess with Google

We’ve learned this the hard way with Magpie and it took us quite some time to recover our page rank, so be sure to read this! Google does not like duplicate content. If you’re copying what others write on the Web or if you have a lot of pages with similar or even identical content, Google’s algorithms will classify your site as spam. What does this have to do with your affiliate program? Well, all those referral links are different because of the token, yet they will most certainly render the same content.

So what can you do? Redirect. Don’t let your app render a page if the request URI contains an affiliate token. Redirect to the actual page using status code 301 (moved permanently). This way, Google will know that the link is still valid (and thus you will get most of the link juice from referring sites), but that its location has changed.

How to implement?

For a long time, we did this within our application. Rails makes it really easy using before_filter, so it’s no big deal. However, your setup may be more complex. Maybe you have multiple apps on subdomains or sub-URIs, and maybe they run on different frameworks. Just think of your corporate blog: most of the time it’s a WordPress. But you’d still want to reward your affiliates if they send you traffic via a link to a great blog post you’ve written, right?

For Planio, we moved the redirection and cookie part to the Web server. Below is a short and sweet Apache config snippet which works really well for us:

# affiliate cookie
RewriteCond %{QUERY_STRING} (.*)ref=([a-zA-Z0-9]{6})(&(.*))?
RewriteRule ^(.*)$ $1?%1%4 [CO=affiliate:%2:.plan.io:43200,R=301,L]

It does everything for us, so our apps don’t have to worry:

  • detect a token in a request URI (we use a ref= query param with a 6 character token)
  • set a cookie named affiliate using the token value which is valid for all our subdomains and for 30 days
  • redirect to the same page using 301, removing the ref parameter and keeping all other query parameters intact (this is great for other tracking stuff, like the params you can generate for Google Analytics)

In the end, we just need a one-liner in our signup code that reads the cookie, finds the affiliate and associates the affiliate with the newly created account.

Update: Thomas points out that you could tell Google to ignore certain query params and avoid 301 redirects using canonicals. He also claims that Google would be my friend. Not so sure about the last one, though 😉

I hope this was useful to you. Do you run affiliate programs for your products? What are your experiences? How did you implement them? I look forward to your thoughts in the comments!