Category Archives: Coding


Firefox is worth using again

When Chrome was first released, there was a huge amount to like about it: it was incredibly fast, extensions were much simpler to make than they were for Firefox, and it built on the already good WebKit work that Apple had been doing. The UI was full of lovely little touches, like putting tabs at the top of the browser's chrome to save space, or the clever little bookmark bar that only appeared when opening a blank window, that really showed the people building it cared about the details.

Firefox, by comparison, seemed to get slower and more cumbersome with each version, but the sheer utility of Firebug kept it in the typical web developer's toolbox, until Chrome's own developer tools reached the point where Firebug was no longer needed.

And about a year ago, it did seem like Firefox was starting to lose the plot: Mozilla had announced an incredibly aggressive three-month release cycle that forced people to actively redownload the app all the time, breaking their plugins each time; the shockingly talented Aza Raskin left the organisation; and in general, it became harder and harder to find reasons to click on that Firefox logo when Chrome did the same job so much faster, without forcing you to jump through endless upgrade hoops.

That said, there was a lot to like about Mozilla's Labs projects. Weave felt years ahead of its time, Prism was handy and quite clever, Tab Candy (now Panorama) as a concept was ingenious enough to have me using crashy-as-hell alpha builds just to be able to use the feature, and the WebGL work totally blew me away when I first came across it.

Coming back

Last week, I met @cyberdees at the fantastic MonkiGras, and after talking to him, I figured it was worth trying out Firefox again. I switched back yesterday, and on the whole, I'm really impressed. It feels about as fast as Chrome does now, and the combination of App Tabs, Panorama, and the way Firefox makes it easy to switch to an existing tab rather than open yet another one pointing at the same Gmail account or the same page on GitHub is exactly the kind of flourish that made me enjoy using Chrome when it was first released.

Giving it a week

I'll be using Firefox for browsing at home, and as my main browser for development this week.

After watching this demo of the new dev tools that are built in:

I think it's safe to say the gap with Chrome has closed again, and I'm really looking forward to using them in anger this week.

For the benefit of other possible switchers, I'll write a short post this weekend reviewing how the week of Firefox-based web development went.

Quick note on working with child themes and WordPress

Continuing my adventures in the world of WordPress theming today, I stumbled across a minor gotcha when working with child themes and writing code in my theme's functions file.

When you're using Dimas Begunoff's WPAlchemy framework for making metaboxes, the code examples refer to creating a sample meta box like so:

 
  $custom_metabox = new WPAlchemy_MetaBox(array
  (
    'id' => '_custom_meta', // underscore prefix hides fields from the custom fields area
    'title' => 'My Custom Meta',
    'template' => TEMPLATEPATH . '/custom/simple_meta.php',
  ));

This won't work with child themes, because TEMPLATEPATH points to the directory of the parent theme. As all your code is (or should be...) in the child theme, you won't be able to get to the template.

You need STYLESHEETPATH

If you're using a child theme, you'll need to use a different constant called STYLESHEETPATH, which rather confusingly gives you the equivalent path for the child theme:

 
  $custom_metabox = new WPAlchemy_MetaBox(array
  (
    'id' => '_custom_meta', // underscore prefix hides fields from the custom fields area
    'title' => 'My Custom Meta',
    'template' => STYLESHEETPATH . '/custom/simple_meta.php',
  ));
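
As an aside, if you'd rather not rely on the constants at all, WordPress also has function equivalents that behave the same way - here's a minimal sketch, assuming a reasonably recent WordPress install:

  $custom_metabox = new WPAlchemy_MetaBox(array
  (
    'id' => '_custom_meta', // underscore prefix hides fields from the custom fields area
    'title' => 'My Custom Meta',
    // get_stylesheet_directory() resolves to the child theme when one is active
    // (like STYLESHEETPATH), while get_template_directory() matches TEMPLATEPATH
    'template' => get_stylesheet_directory() . '/custom/simple_meta.php',
  ));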

Found via nabble.

How to get Git, Things and Dropbox to work together

I've started using Things as a way to keep track of things I've committed to doing, or should be doing, and I'm a big fan: it syncs nicely with an iPhone, and it's full of lovely UI touches, like automatically loading the URL or email message URI you're looking at into the notes section when making a new to-do item, providing well-thought-out keyboard shortcuts, or simply having a "Today" mode designed to let you set targets just for today, and forget about the rest of your ever-growing to-do pile.

There's one shortcoming I've found though, and that's using Things with more than one computer - it isn't supported out of the box, and Dropbox, my normal way of sharing content between two computers, doesn't play too nicely with Things by default either.

However, on Cultured Code's own wiki there are some handy instructions for using Git to allow multiple instances of Things to share a single tasks database. Setting up a remote git server just to share to-dos across computers seems like overkill though - instead, I've adapted the instructions to work with Dropbox, using a simple directory to share repos instead of a whole separate server.

In this post, I'll outline how to set this up for yourself.

What we're going to do here

This is a fairly lengthy post, so I'll outline what we're doing first:

a) Set up a Git repo in Dropbox that we push updates to.

b) Set up a clone of this repo on the computer where the Things to-do database is normally stored, and track changes in there.

c) Set up something like a cronjob, using OS X's equivalent, launchd, to commit the changes every half hour and push them to the main Git repo on Dropbox.

Off we go then...

Setting up our Dropbox Git Repo

First of all, we set up our repository in Dropbox like so:

  mkdir -p ~/Dropbox/Git-Repos/Things.git

It should look something like this in the finder:

Now we have a directory for our git repo, we need to initialise it, using a special flag, --bare. This creates a repo that's ready for us to push code to:

  cd ~/Dropbox/Git-Repos/Things.git
  git --bare init

Congrats, that's step one done!

Cloning the repo into where we keep our Things

Now we have somewhere to push code to, let's start pushing. We're going to go into the directory where the Things data is stored, initialise a repo, and make our first commit:

  cd ~/Library/Application\ Support/Cultured\ Code/Things
  git init
  git add .
  git commit -a -m "initial commit"

Okay, we've made our repo for Things, but we still don't have a way of getting content to the Dropbox repo, which is our canonical repo that we'd like to push code to. We need to tell git about the remote we want to push code to.

Normally with git, you'll push code to another server, but you can just as easily push to another directory - this lets Dropbox handle the hard work of giving us distributed backups and making this repo available to other computers:

  git remote add dropbox ~/Dropbox/Git-Repos/Things.git
  git push dropbox master
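
A quick way to double-check the remote took is to list what git knows about (the paths in this output are just an example - yours will show your own home directory):

  git remote -v
  # dropbox  /Users/yourname/Dropbox/Git-Repos/Things.git (fetch)
  # dropbox  /Users/yourname/Dropbox/Git-Repos/Things.git (push)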

Okay, so we've now created a central repo, we've created a local repo where we store our Things, and we've pushed our first chunk of code there too. Now we just need to automate this process, so we don't have to think about it in future.

Making this run on autopilot

For this, we turn to launchd, a tool on the Mac for running commands automatically in the background, somewhat like a newer version of cron. It's what starts all your programs on boot, but it's also used to run certain tasks on a schedule.

What we're going to do here is use it to run a shell script every 30 minutes when the computer is awake, to commit the latest state of the file storing our Things data locally, then push that changeset to the main repo on Dropbox. Paste this code into a file called thingssync.sh.

  #!/bin/bash
  DATE=`date`
  REMOTE='dropbox'
  cd ~/Library/Application\ Support/Cultured\ Code/Things
  git pull $REMOTE master
  git commit -a -m "Auto Sync - $DATE"
  git push $REMOTE master

The git pull line is there to keep this in sync with other instances of Things. The git commit line will always give us a unique commit message, and the git push makes sure we are pushing to the right branch.

Next, we need to breathe life into this script by making it executable. We use the sometimes arcane unix command chmod (the +x part of the command sets the executable bit, allowing it to be run by itself):

  chmod +x thingssync.sh

We need to put this newly enlivened script somewhere launchd can actually access it, so let's put it into /usr/local/bin - I prefer this over /usr/bin as a place for storing executables, because a) it means we don't have to start using sudo to make future changes, and b) it should be clearer that this isn't a core binary that could be clobbered by a software update from Apple.

  cp thingssync.sh /usr/local/bin/

The final steps are creating a launchd config file known as a property list, putting it where launchd normally looks for config files, and then loading it with launchctl to start running our scheduled job.

First paste this code into the property list file (more commonly referred to as a plist file) called com.$(hostname).ThingsSync.plist:

 
  <?xml version="1.0" encoding="UTF-8"?>
  <!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
 
  <plist version="1.0">
    <dict>
      <key>Label</key>
      <string>com.twelvestone.ThingsSync</string>
      <key>Program</key>
      <string>/usr/local/bin/thingssync.sh</string>
      <key>RunAtLoad</key>
      <true />
      <key>StartInterval</key>
      <integer>1800</integer>
    </dict>
  </plist>

Then put it into the directory set aside for loading plist files, ~/Library/LaunchAgents:

  mkdir -p ~/Library/LaunchAgents
  cp com.$(hostname).ThingsSync.plist ~/Library/LaunchAgents

Our last step is to load the plist file with launchctl, so it knows of its existence without us needing to reboot the machine:

  launchctl load ~/Library/LaunchAgents/com.$(hostname).ThingsSync.plist
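
To check the agent actually loaded, you can ask launchctl for its list of jobs (the grep just filters the output down to ours):

  launchctl list | grep ThingsSync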

Once this is done, we should have our Things data being committed and pushed with git every 30 minutes while the computer is awake. If we want another machine set up to work with this, we can clone from the repo in Dropbox, follow steps b) and c) again, and at last have multi-Mac sync using Things - a sketch of what that might look like is below.
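
For the second machine, the setup might look something like this - a sketch, assuming Dropbox lives in the same place on both Macs, and that you move any existing Things data out of the way first:

  cd ~/Library/Application\ Support/Cultured\ Code
  mv Things Things.backup  # keep the old database around, just in case
  git clone ~/Dropbox/Git-Repos/Things.git Things
  cd Things
  git remote rename origin dropbox  # match the remote name thingssync.sh expects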

Discovering the jQuery data method

Every now and then you come across a trick in web development, or a particular method in a library's API that makes you wonder how you bumbled along without it, and totally changes how you work in future.

That happened today for me when I discovered the data() method in jQuery.

In short, it lets you attach arbitrary data to elements you're referring to in the DOM, without resorting to gruesome hacks like coming up with bogus rel=foo attributes to store data against specific DOM elements.

For example, let's say we have a list of people in the unordered list below:

    <ul class="fave_streetfighters">
        <li id="ryu">Ryu</li>
        <li id="ken">Ken</li>
        <li id="chunli">Chunli</li>
        <li id="zangief">Zangief</li>
    </ul>

Now let's say I have some data I want to add to these characters, freshly delivered by an ajax call:

    data = [
        {   "name":"Ryu",
            "sex":"male",
            "super_move":"hoofing great fireball",
            "likes":"pyjamas"
        },
        { 
            "name":"Ken",
            "sex":"male",
            "super_move":"big fiery jumping uppercut",
            "likes":"being arrogant"
        },
        { 
            "name":"Chunli",
            "sex":"female",
            "super_move":"hundred foot kick",
            "likes":"spiky bracelets"
        },
        {
            "name":"Zangief",
            "sex":"male",
            "super_move":"spinning pile driver",
            "likes":"tight red pants"
        }
    ]

Instead of embedding this data in each list item by faffing around with weird classnames, or made up attributes like this, which messes the DOM up something horrible:

    $('li#ryu').attr('made_up_attribute_for_super_move', data[0].super_move);
    $('li#ryu').attr('made_up_attribute_for_likes', data[0].likes);

... the cleaner way to add this data to the corresponding element is to use the data() method like so:

    $('li#ryu').data('profileInfo', data[0]);

This lets me associate the two, and fetch the data in future easily, with a related data() call:

    var cowardly_finishing_move = $('li#ryu').data('profileInfo').super_move;
    // returns "hoofing great fireball"
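
Rather than wiring each character up by hand, you can attach the whole lot in one pass - a quick sketch, which assumes (as in the markup above) that each li's id matches the lowercased name in the data:

    $.each(data, function(i, profile) {
        // pair each profile with its matching list item
        $('li#' + profile.name.toLowerCase()).data('profileInfo', profile);
    });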

jQuery uses this data() method internally, which makes it extremely fast, and it's been in the API for ages, but it's easy to overlook if you don't see it in use in someone else's code, or read the API docs yourself.

Or, if like me, you didn't read this excellent post here by Marc Grabanski back in 2008.

Oh well, better late than never, right?

How I am debugging Drupal Views

I've been working with Drupal a lot lately, and while there are lots of reasons to like it, every now and then I stumble across some ridiculously frustrating idiosyncrasy that makes me seriously reconsider working with the web professionally. I've documented the process of debugging a recent view - about a day in total of head scratching, swearing, and general unhappiness - so I can refer to it in future when I'm next battling with Views, because I really don't ever want debugging to be this frustrating again.

The most recent time suck of mine has been working out why one view, provided by the Heartbeat module, was outputting content totally differently to the rest of the site. Here's the HTML I'd normally expect to see:

  <a href="http://project.work.local/node/166">joe bloggs</a> has <a href="http://project.work.local/node/166">added the page Multimedia Gallery</a>

Here's what was being generated instead:

  joebloggs [1] has added page Multimedia Gallery [2]. [1] http://project.work.local/users/joebloggs [2] http://project.work.local/node/166

Chucking that string into Google led me to the Drupal function drupal_html_to_text, which you'd normally use to sanitise text before emailing people, but this function didn't seem to crop up directly in either the Views code or the Heartbeat code.

Running ack to look for any occurrences of the function didn't help either - I'd normally expect to see it called somewhere in the two modules, but that brought up nothing.

Even throwing an exception before the variable was generated didn't show me where the text was being changed.

There's no logging that I know of to let me trace a request from hitting the server to coming out the other end to see what functions are touching it.

I was stuck.

Why was this happening?

Eventually, I found out that I had the default input format set up wrong, which was the cause of all this pain. In this file in the Heartbeat module, heartbeat/views/heartbeat_views.views.inc, the view was reconstructing the output based on the default filter, and handing it over to the views_handler_field_markup handler for reformatting:

  // Heartbeat activity table
  $data['heartbeat_activity'] = array(
 
      // Table to join
      'table' => array(
 
        'group' => t('Heartbeat activity'),
 
        'base'  => array(
          'field' => 'message_id',
          'title' => t('Heartbeat activity messages'),
          'help'  => t("All activity logged by heartbeat"),
        ),
        /* 'join' => array(
          'heartbeat_messages' => array(
            'left_field' => 'message_id',
            'field' => 'message_id',
          ),
        ), */
      ),
 
  //  snip
  // pass content through the input filter before displaying it 
  'field' => array(
    'handler' => 'views_handler_field_markup',
    'format' => FILTER_FORMAT_DEFAULT,
  ),

In this views/handlers/views_handler_field_markup.inc file, the text was being reformatted using the render function:

  function render($values) {
    $value = $values->{$this->field_alias};
    $format = is_numeric($this->format) ? $this->format : $values->{$this->aliases['format']};
    if ($value) {
      return check_markup($value, $format, FALSE);
    }
  }

... and herein lies the problem. My default format was no longer Filtered HTML, but a plain text format used for messaging. Looking at my defaults, you can see why the links were being converted:

How to solve this problem

There are two ways you can solve this -

1) You can change the default input format to allow links in the first place. This keeps control in the database, which works great for site builders who want to change content through the Views UI.

2) You can override the template with code, and put it in source control.

Solving this in the database

In this case, our output finally ends up on our page via the default template views-view-field.tpl.php inside the Views module, visible below. The important variable to bear in mind here is $output, the result of all the preprocessing we define through the Views UI, plus whatever other parts of Drupal decide to chime in on how they think the content should be rendered, like our input formats.

  <?php
  // $Id: views-view-field.tpl.php,v 1.1 2008/05/16 22:22:32 merlinofchaos Exp $
   /**
    * This template is used to print a single field in a view. It is not
    * actually used in default Views, as this is registered as a theme
    * function which has better performance. For single overrides, the
    * template is perfectly okay.
    *
    * Variables available:
    * - $view: The view object
    * - $field: The field handler object that can process the input
    * - $row: The raw SQL result that can be used
    * - $output: The processed output that will normally be used.
    *
    * When fetching output from the $row, this construct should be used:
    * $data = $row->{$field->field_alias}
    *
    * The above will guarantee that you'll always get the correct data,
    * regardless of any changes in the aliasing that might happen if
    * the view is modified.
    */
  ?>
 
  <?php print $output; ?>

After losing a day hunting down the source of this display issue by spelunking through a lot of Views and activity contrib module code, countless blog posts and confusing Views documentation, I think this is a terrible idea, especially if you're developing a website, you're already using source control, and you value consistency and simplicity.

Solving this in code

The other way to solve this problem is to use an overriding template - the Views UI is considerate enough to suggest the name for one when setting up a view in the first place, and to generate a handy starting template too. The output presented should look something like this:

In my case, the override template is views-view-field--heartbeat-activity--block-1--message.tpl.php:

  <?php
  // $Id: views-view-field.tpl.php,v 1.1 2008/05/16 22:22:32 merlinofchaos Exp $
   /**
    * This template is used to print a single field in a view. It is not
    * actually used in default Views, as this is registered as a theme
    * function which has better performance. For single overrides, the
    * template is perfectly okay.
    *
    * Variables available:
    * - $view: The view object
    * - $field: The field handler object that can process the input
    * - $row: The raw SQL result that can be used
    * - $output: The processed output that will normally be used.
    *
    * When fetching output from the $row, this construct should be used:
    * $data = $row->{$field->field_alias}
    *
    * The above will guarantee that you'll always get the correct data,
    * regardless of any changes in the aliasing that might happen if
    * the view is modified.
    */
  ?>
 
  <?php print $row->{$field->field_alias}; ?>

The important change here is that we're no longer printing $output to the screen, but $row->{$field->field_alias}, which, as the documentation tells us, is the content before it's been messed with by the input filters and suchlike. With direct access to the $row result and its attributes, we finally have the degree of control over our layout that I'd be used to in any other tool I'm more familiar with, like Rails, Django, or WordPress.

The real watershed moment with this bug came when I gave up looking on Drupal.org, used Stack Overflow to see if anyone else had had a similar problem, and followed links from an issue that looked very close to mine.

Hopefully this will help someone else losing their hair while working on a Drupal project, and help explain how this frustrating framework decides to serve content to users through Views.

This entire development process would be made so much easier if Drupal had an option to log the path a request takes through the framework, like Rails does, so you can see which templates are being called, which queries are being made, and so on.

Surely there's a way to do this so you can see what Drupal is actually doing under the hood, instead of making as many semi-educated guesses as I had to here?

There’s no need to type your password when you restart Apache, really…

When you're developing with PHP on a Mac and you're not using MAMP, you'll often end up doing a lot of manual restarts when you change how you've set up Apache (assuming you haven't joined all the cool kids and moved onto Nginx yet...). This usually involves calling up a terminal window and typing the usual Apache restart command on OS X:

  sudo apachectl restart

This isn't a really destructive command, and having to type your admin password every time you run it in development on your own computer gets old quickly. It's also error prone. Surely there's a better way?

Fortunately, when browsing the Aegir OS X install documentation, I came across a handy fix to this problem. The Aegir hackers let Aegir handle server restarts in a fairly elegant fashion by tweaking the sudoers file on your Mac, which is basically a short list of who is allowed to do what on your machine. I've borrowed a few of their tricks and adapted them for my own sudoers file, and after showing it in full, I'll explain how it works.

Bear in mind, you can't edit the sudoers file directly - you need to use the visudo command (a precaution that stops the file getting screwed up, by preventing more than one person editing it at a time, for example).

Also, to make things more complex, you need to edit this inside the terminal, so you may need to force a terminal editor by setting EDITOR='vim' first.

Okay, now that's out of the way, let's look at that file:

  # Run as alias specification
 
  # User privilege specification
  root    ALL=(ALL) ALL
  %admin  ALL=(ALL) ALL
 
  # Uncomment to allow people in group wheel to run all commands
  # %wheel        ALL=(ALL) ALL
 
  # Same thing without a password
  # %wheel        ALL=(ALL) NOPASSWD: ALL
  %staff          ALL=(ALL) NOPASSWD: /usr/sbin/apachectl
  # Samples
  # %users  ALL=/sbin/mount /cdrom,/sbin/umount /cdrom
  # %users  localhost=/sbin/shutdown -h now

Let's look at the first lines, with root and %admin. If you're even bothering to read this, the chances are you know that root refers to the all-powerful user that can do anything on a system, but you may not be familiar with the percent prefix on %admin, nor the ALL=(ALL) ALL. The %admin basically means 'anyone in the admin group', but the ALL=(ALL) ALL is somewhat more cryptic. The rough translation goes like this:

on ALL hosts, let these users run ALL commands as ALL of the users on the system.

We see the same trick again with the %wheel group, but the line starting with %staff deserves more attention:

  %staff          ALL=(ALL) NOPASSWD: /usr/sbin/apachectl

Translated, this means:

for all members of the staff group, on ALL hosts, let them run the command /usr/sbin/apachectl as ALL users (in particular, the root user) without needing a password (that's the NOPASSWD: bit).

This is the line that lets us run the familiar sudo apachectl restart without needing to constantly type our password in.
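
If that feels too broad, sudoers will also match specific arguments, so you could limit the passwordless treatment to just the restart commands - a sketch, worth double-checking the path to apachectl on your own system first:

  %staff          ALL=(ALL) NOPASSWD: /usr/sbin/apachectl restart, /usr/sbin/apachectl graceful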

Over the course of a year, this will easily save you tons of typing, and leave some time to skim the sudoers man page and suggest a similar trick here for others to try.

Over to you now...

How to get back into a Drupal site if you’ve locked yourself out

A few posts back, I shared a one-liner to get you back into a WordPress site if you manage to lock yourself out and forget your password.

Assuming you have access to the command line and drush, you can pull a similar trick with Drupal by typing the following query in:

    drush sql-query "update users set pass=md5('NEWPASSWORD') where uid = 1;"

What's happening here?

The first thing we're doing is calling drush sql-query, a sub-command of drush.

If you haven't used Drush yet, you really, really should. It totally transforms how you work with Drupal, by making the kinds of tasks you had to do manually through the website possible from a commandline, which means yes, you can get up to all kinds of handy scripting shenanigans.

As you might expect, drush sql-query lets you pass an arbitrary query to the database described in your site's settings file, without you needing to fish the credentials out yourself. Here's our query:

 update users set pass=md5('NEWPASSWORD') where uid = 1;

In short, we're updating the users table in the Drupal database, setting the pass value for the first user id (where uid = 1) to an md5 hash of the phrase NEWPASSWORD.
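
If you want to check the change took, you can read the row straight back with the same sub-command:

    drush sql-query "select uid, name, pass from users where uid = 1;"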

If you don't have access to drush or the command line, but you can still change a file over SFTP, you can do the same by adding a snippet like this to the site:

  $doh_forgot_my_password = db_query("update users set pass=md5('NEWPASSWORD') where uid = 1;");

Of course, you really should be parameterizing this like so:

  $doh_forgot_my_password = db_query("update users set pass=md5('%s') where uid = %d;", array("NEWPASSWORD", "1") );

But given that this snippet should only exist in a template for 15-20 seconds at the most, you'd probably be forgiven for taking the shortcut...

Quick heads up on a super handy cheat sheet for Drupal

I'm working with Drupal a lot at work now, and in the process of creating my last post, I stumbled across this cheat sheet for developing with Drupal.

I had no idea you could call ddebug_backtrace() to get an instant stack trace, or dargs() to see the arguments being passed into a function at any point, or dd() to log directly to a text file (which you can tail in a terminal window to see stuff as it happens).

Now, if only Simpletest wasn't so depressingly slow...

How to setup Snow Leopard for LAMP development and debugging

Over this weekend, I've been looking at ways to make it easier to work with PHP, largely because I've been using it more and more at work, and I've felt spoilt by the tools available when using Ruby, like the ruby debugger, or Python's own interactive pdb debugger when playing with Django or Twisted, so I decided to give MacGDBp a try.

It's pretty handy - it allows you to step through your code when things break, giving you an idea of what exactly is happening under the hood, or letting you understand the actual path a request takes through your code.

I'm not using MAMP because, where possible, I'm trying to minimise duplication of software on my computer, and I've previously confused myself something horrible trying to keep track of multiple instances of MySQL and Apache. So if you're not enamoured with MAMP either, following these instructions should get you set up with a decent debugging tool on Snow Leopard, and a fairly maintainable LAMP stack along the way.

Get PHP 5.2

This isn't strictly necessary, but right now I've found PHP 5.2 to be less hassle when developing than 5.3, and for the time being, Drupal and WordPress seem to be standardising on it before moving to the newer version, which means I am too.

The best instructions I've found for doing this are in PHP 5.2 for compatibility with Drupal, which also provides a good introduction to the amazing Homebrew - you really should be using it instead of MacPorts or Fink, if you're not already.

Let's get the hard commandline work out of the way first.

Setup PEAR to complement Homebrew's PHP 5.2

Just like we have pip or easy_install for Python, CPAN for Perl, and Rubygems for Ruby, with PHP we have PECL and PEAR. PEAR is a repository of PHP classes, like PHPUnit for unit testing, whereas PECL is a repository of C extensions like APC or Memcache, which help with performance and caching, or link to other programs.

By default we do have PEAR and PECL installed, but in keeping with Homebrew's example of storing stuff in /usr/local/ to avoid needless sudo'ing, we're going to install our own version of PEAR using this handy one-liner. What's happening here is that we're using curl to fetch http://pear.php.net/go-pear, and then streaming it into the php command:

    curl http://pear.php.net/go-pear | sudo php

We'll be presented with some text, once we've entered our admin password, along the lines of:

  Welcome to go-pear!
 
  Go-pear will install the 'pear' command and all the files needed by
  it.  This command is your tool for PEAR installation and maintenance.
 
  Go-pear also lets you download and install the following optional PEAR
  packages: PEAR_Frontend_Web-beta, PEAR_Frontend_Gtk2, MDB2.
 
  If you wish to abort, press Control-C now, or press Enter to continue: 
 
  HTTP proxy (http://user:password@proxy.myhost.com:port), or Enter for none::

We probably don't need a proxy, so you can just hit enter, at which point we'll be asked the following:

  Below is a suggested file layout for your new PEAR installation.  To
  change individual locations, type the number in front of the
  directory.  Type 'all' to change all of them or simply press Enter to
  accept these locations.
 
   1. Installation prefix ($prefix) : .
   2. Temporary files directory     : $prefix/temp
   3. Binaries directory            : $prefix/bin
   4. PHP code directory ($php_dir) : $prefix/PEAR
   5. Documentation base directory  : $php_dir/docs
   6. Data base directory           : $php_dir/data
   7. Tests base directory          : $php_dir/tests
 
  1-7, 'all' or Enter to continue:

We want to change the installation prefix to make sure we're putting this stuff into /usr/local - I found I had to do this explicitly, because the default value of $prefix left me with build errors.

Once we've added this, we'll get a load of text flying past as PEAR is built, and end up with some text saying something like:

    WARNING!  The include_path defined in the currently used php.ini does not
    contain the PEAR PHP directory you just specified:
 
    If the specified directory is also not in the include_path used by
    your scripts, you will have problems getting any PEAR packages working.
 
    Would you like to alter php.ini ? [Y/n] :

Why yes, we would like this added. This will add this snippet to your php.ini file:

    ;***** Added by go-pear
    include_path=".:/usr/local/PEAR"
    ;*****

One important thing here - now that you have this, make sure your PEAR

Now, anything you add with pear will be available in future, so installing PHPUnit for unit testing is as simple as calling this on the commandline:

  pear install phpunit/PHPUnit

In a similar fashion to Rubygems, these libraries are available by calling require_once as if you were in /usr/local/PEAR, so to pull in PHPUnit, you just type:

  require_once 'PHPUnit/Framework.php';
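
From there, a first test might look something like this - a minimal sketch, assuming the PHPUnit 3.x conventions of the time (the class and file names are made up for the example):

  <?php
  // SanityTest.php - a deliberately trivial test case
  require_once 'PHPUnit/Framework.php';

  class SanityTest extends PHPUnit_Framework_TestCase
  {
      public function testAddition()
      {
          $this->assertEquals(4, 2 + 2);
      }
  }

Running phpunit SanityTest from the same directory should then report a single passing test.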

Setup PECL with PHP 5.2

Now that we have PEAR set up, we should spend a bit of time making PECL simple to administer without sudo. All I needed to do here was make sure everything in /usr/local/Cellar/php52/ belonged to my normal user account, by calling:

    sudo chown -R $(whoami) /usr/local/Cellar/php52/  # chown needs a user to hand ownership to

This does away with the need to compile things as root, and makes installing extensions like Memcache, MongoDB, or APC as simple as:

    pecl install apc
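
Assuming the install finished cleanly and added the extension to your php.ini, you can check it's actually being loaded by listing PHP's modules:

    php -m | grep apc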

If you find yourself having trouble here, check the permissions on any extra directories created during the PECL installation process - I had a couple of issues because some extensions had been installed as root earlier, when following Hunter Ford's instructions at Cupcake with Sprinkles, which left my user account trying to put files into directories owned by root.

Fetch Xdebug

If we didn't need xdebug, this would be all we needed, but sadly things aren't that simple. If we try calling something like pecl install xdebug, we DO get a version of xdebug installed, but it's not quite what we need, and doesn't seem to do anything useful. If we want this to work with MacGDBp, we need the version compiled by the chaps at ActiveState, from their Remote Debugging page, that's designed to work with their own IDE but also with other tools. Once we've downloaded the tar file, we need to choose the correct version for our PHP,

and then copy it to where the other extensions live:

    cp xdebug /usr/local/Cellar/php52/5.2.13/lib/php/extensions/xdebug.so

Then we make the relevant changes in our php.ini file, pointing to the xdebug shared object (that's the .so suffix on extensions), and providing a few defaults as directed on the MacGDBp help page:

  ; Adding xdebug
  zend_extension=/usr/local/Cellar/php52/5.2.13/lib/php/extensions/xdebug.so
  xdebug.remote_enable=1
  xdebug.remote_autostart=1
  xdebug.remote_host=localhost
  xdebug.remote_port=9000

In short, we're telling xdebug to switch on by default and listen on localhost:9000 for any clients that want to connect when we want to inspect what's happening with our code.

Finally turning on MacGDBp

Still with me? Good. At long last, we can finally start looking through code in our debugger. Fire up MacGDBp, and when you run your next PHP script, you should get to see something like this pic pilfered from Particle Tree's own post on this subject:

There's far more to this, but in general the key to using the debugger is knowing how to set breakpoints, and remembering that your choices in the blue buttons are (from left to right):

  • stepping into functions to see what's happening as a request is passed from function to function
  • stepping out when you no longer need to see what's happening in a particular function
  • ...and stepping past a particular function, without needing to look into its working at all

The green button is a fast-forward button, taking you to the next breakpoint you might have in the code, and the power button refreshes the connection to the debugger. You really, really should look at this post by Tim Sabat at Particle Tree, and this one by Matt Butcher at TechnoSophos.
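
One more trick while you're getting the hang of breakpoints: as well as setting them in the MacGDBp UI, xdebug lets you trigger one from code with its xdebug_break() function. A quick sketch - the surrounding loop is just an example:

  <?php
  $characters = array('Ryu', 'Ken', 'Chunli', 'Zangief');
  foreach ($characters as $character) {
      if ($character == 'Ken') {
          xdebug_break(); // MacGDBp should pause here, letting us inspect $character
      }
  }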

Debugging without MacGDBp

Of course, you don't need to use MacGDBp all the time. In most cases, just having a stack trace will help find the source of the error. Once you've got xdebug set up, you should get a handy stack trace whenever you throw an error - if not, make sure you have the display_errors setting set to 'On' in your php.ini (it should be around line 370 in your file, normally):

    display_errors = On

By the time you've followed these steps, you should have an easier-to-maintain installation of the LAMP stack, with a simple way to install extra PHP libraries and C extensions, a couple of much more effective ways to debug than simply typing echo $variable everywhere, and an ideal environment to get on with hacking on Drupal, WordPress and any other PHP code in future.

Phew!

This guide is largely a synthesis of earlier blog posts about this subject by Justin Hileman, Matt Butcher, Tim Sabat, Hunter Ford, and Boris Gordon. I'd recommend subscribing to their blogs, as they definitely know more about PHP than I do, and they're a great source of handy info.

If there's anything that isn't clear, please let me know so I can improve this guide - I don't want anyone else to have to blunder through all this themselves in future. Working it all out has taken far, far longer than I'd like, and, well, it seems churlish not to share now that I've got a setup that seems fairly stable.

Now, time to actually do something useful with it...

Looking for the best way to keep data on servers safe

We know we should all be doing it, but most of us don't do it enough.

I put out a request today to my followers on Twitter, asking how they keep the data on their servers safe and backed up.

I've had the following services recommended to me by a number of fellow developers whose opinions I have a lot of respect for:

A couple of ex-Headshifters pointed me to Duplicity, a free tool that also looks very promising. Using it looks pretty straightforward:

    duplicity /home/my_directory scp://backup@other.host//usr/backup

A variant of it can be used to back up to Amazon S3 too, and there's a helpful blog post showing how to do this here, if you use Ubuntu or OS X.
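
As a rough sketch of what the S3 variant looks like (the bucket name here is made up, and duplicity expects your AWS credentials in the environment):

    # duplicity's S3 backend reads these two variables
    export AWS_ACCESS_KEY_ID='your-key-id'
    export AWS_SECRET_ACCESS_KEY='your-secret-key'
    duplicity /home/my_directory s3+http://my-backup-bucket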

Kalv recommended Safe, a clever Ruby gem by the Astrails crew that was originally designed to automate the backing up of Rails projects in a simple fashion.

One service that looks really interesting, though, is Tarsnap, as recommended by Jon Gilbrath. It's similarly simple to use, but takes care of some of the more awkward backup issues, and saves you having to set up your own S3 account.

  # Create an archive named "mybackup" containing /usr/home and /other/stuff:
  tarsnap -c -f mybackup /usr/home /other/stuff

The developer, Colin Percival, has also written quite extensively about how it works on his own blog - he's only making a few cents per gigabyte providing this service, yet it still looks to be viable for him to run - amazing.

It looks like almost exactly what I'm after - the only thing missing is the option to back up to storage inside the EU. This requirement mainly comes from data protection concerns from previous clients, because the rules for processing and storing data in the EU are different to those in the US, but given the strength of the encryption, I'm not sure how much of an issue this really is these days.

I'd really appreciate some light shed on this actually - technology is moving so much faster than law these days, it's frustrating not being able to take advantage of these kinds of services.

Anyway that's what I've found. What do you use, and why?