
Greg MacLellan

November 30, 2005

Phishing IQ

Filed under: General — groogs @ 9:15 am

I just took the MailFrontier Phishing IQ Test II, and I didn’t do well. As a web developer, I consider myself very knowledgeable in this area, and I’m pretty sure I’ve never been fooled by a phishing email. So why did I do so badly? Simple: I guessed "phishing" for everything.

I never click on any links from email that will lead to me entering account information. All of their sample emails had these links, so I didn’t trust any of them. Why do companies still do this? If you read through the answers after taking the test, all of the legitimate messages say "Be safe – always enter the address in your browser". This is good advice.

There was one message that was simply "providing information", and therefore not considered dangerous. But consider such an email: "Here’s some information about our new online bill payment service", with a link to a site that contains a bunch of information talking about the benefits of online bill payments, showing how easy it is, etc. Also on that site, for customer convenience, is an "add to my account now" button. If this is in fact a phishing site, that link is going to be a scam to collect your account data. If phishers aren’t doing an attack this "sophisticated" now, they will be soon.

According to the published test results, only 4% of test-takers got 100%, with the average score being 75%. At least that’s up from 61% a year ago.

So what is the solution? Well, since e-mail is fundamentally broken there really is no easy technological solution, besides outright replacing the SMTP protocol with something better.

One start would be to simply not include links in e-mail. Companies should generally make sure that anything they send in an email can be done manually by visiting their site, and provide instructions on how to do so.

The catch here, of course, is that users need to be trained to recognize (for example) their bank’s URLs. If you know your bank’s real address and you get a request to go to a similar-looking but different domain, that needs to set off alarm bells, and I’m not sure most users will recognize the difference.
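The mental check users are supposed to make can be sketched in a few lines. This is only an illustration; the domain names and the `looks_suspicious` helper are made up for the example, not taken from any real bank:

```python
from urllib.parse import urlparse

# Hypothetical whitelist of the bank's real hostnames (made-up names).
TRUSTED_HOSTS = {"mybank.example.com", "www.mybank.example.com"}

def looks_suspicious(link: str) -> bool:
    """Flag any link whose hostname isn't one we already trust."""
    host = (urlparse(link).hostname or "").lower()
    return host not in TRUSTED_HOSTS

print(looks_suspicious("https://mybank.example.com/login"))         # False
print(looks_suspicious("https://mybank-secure.example.net/login"))  # True
```

The point of the sketch is how exact the match has to be: anything that isn’t a literal, already-known hostname should be treated as hostile, which is precisely the discipline most users don’t have.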

Another solution may be to develop a new specially-trusted high-security certificate authority that has very stringent requirements for granting certificates to companies. I remember the first certificate I got cost about $200 and required my driver’s licence, vendor permit, and some other official company documents (yes, this stuff can be spoofed too, but it is a bit more work). Those days are gone. Now it’s very simple to get a certificate that will be "trusted" in practically every browser – it costs about $40US, and requires just one email as verification.

If all browsers displayed a site signed with one of these special certificates differently, then this would be a major way to stop phishing. Interestingly, the people working on Firefox, Internet Explorer, Opera and Konqueror are all working together on this. They’d show the address bar in green on one of these sites, and users would need to be trained to not enter any sensitive information on a page without a green address bar.

IE7 beta address bar (from arstechnica article, originally from MSN)

The special certificate authority could either be a root CA for current certificate authorities, or just be an organization that would publish a list of trusted root certificates. Either way, the organization would have to audit the people issuing certificates that showed up green, to ensure the issuers weren’t relaxing their requirements to gain an edge on their competition. The entire system is based on the trust of the root CAs, so if those authorities violate the trust and issue a certificate to an illegitimate phisher that produced some phony documents, the whole system breaks down.

So until those smart folks at Mozilla, Microsoft, Opera and KDE save us, there are some useful tips out there to avoid being scammed.

November 25, 2005

The Complex World of Toothpaste

Filed under: General — groogs @ 12:12 pm

I finally decided this morning that I had really squeezed everything I could out of my tube of toothpaste, after a couple of days of thinking the same thing. Today on my way home from work, I remembered to stop and buy some more. I have to say, I just don’t understand the toothpaste industry.

As an example, here are the toothpastes Colgate offers:

  • Total Advanced Fresh – anti-bacterial to fight tartar etc., and freshens breath
  • Total – anti-bacterial, fluoride for cavity protection (also available as gel)
  • Total plus whitening
  • Total Fresh Stripe
  • Sensation Whitening (also available: tartar fighting, and baking soda and peroxide)
  • Cavity Protection (green mint, winterfresh paste, or gel)
  • 2-in-1 whitening (fresh mint or tartar fighting)
  • Fresh Confidence with whitening – freshens breath, whitens teeth
  • (plus some children’s ones, bubble-gum flavoured, etc. that I won’t bother to list)

Now, when it gets down to it, there’s really one thing toothpaste has to do: clean your teeth. Freshening your breath, preventing cavities, fighting tartar, whitening teeth – these are all good goals, but what exactly makes them mutually exclusive?!? I seriously don’t get it. Is it that hard just to make one kind of toothpaste that: cleans teeth, freshens breath, prevents cavities, fights tartar and whitens teeth? Or even make two – one with whitening, one without (after all, you don’t want your teeth to be too white).

Instead, the toothpaste companies feel it is necessary to make about 16 different products – each – causing people trying to buy toothpaste to stand slack-jawed in front of the display for 10 minutes trying to figure out which of the 100 possible choices is the best.

Ok, so maybe I was a bit harsh earlier. Two kinds of toothpaste is perhaps a bit slim. Really, it’s not a bad thing to have choice. Some people like mint, some don’t. Some like gel, some like paste. So why don’t they make combinations of those? Well, they sort of do. Except they’re all sub-variations of the different combinations of breath freshening and tartar fighting types, which just complicates things even more. I’m a simple man, I just want a simple toothpaste.

Now it’s time to go brush my teeth so I can go to bed.

November 23, 2005

Never enough sleep

Filed under: General — groogs @ 2:10 am

A few weeks ago, I took some interest in sleeping, sleep cycles, and things like polyphasic sleeping.

There are a ton of different theories out there as to what is the best way to get the highest quality sleep. This is of interest to me, as I’m someone that generally stays up late and hates getting up in the mornings. I seem to get a lot more work done at night, and maybe that’s just a mindset but it’s been true for a long time.

For many years, I’ve gotten just a little sleep during the week (say, 5-6 hours), followed by a lot of sleep on the weekends (10-12 hours) – when I can, anyways. For the most part this seems to average out and work well, though apparently it’s not supposed to. I do notice that if I don’t get a lot of sleep on the weekend, I am tired all the next week and usually end up going to bed a lot earlier on one or two days.

So anyways, after deciding that I don’t want to become a polyphasic sleeper, or drastically change my lifestyle, my research led me to the conclusion that the best thing to do is try to live with sleep cycles.

Simply put, a sleep cycle consists of five stages of sleep. You go from a light stage, to a couple deeper stages, to the deepest stage 4 and stage 5 (REM) sleep. The whole cycle lasts approximately 90 minutes, and occurs constantly while you’re sleeping. If you wake up during the deepest stages, you feel groggy and tired, like you just want to go back to bed.

The best time to wake up is during the period where you’re in a light sleep between cycles, and in fact, if you were to not have an alarm clock or any other outside stimulus to wake you up, this is when you’d wake up naturally. Of course, most alarm clocks don’t know when you’re in this cycle, so they just wake you up with their .. ahem, pleasant .. noises whenever they are set to do so.

There are alarm clocks that actually monitor your sleeping, and I’d be interested in trying one, though I’m a bit skeptical of how well they’d work. They all work on the basic idea that they go off during your last light sleep phase before the time you’ve set to wake up. For example, if you set the alarm to get you up at 7:30 am, and you are in a light sleep phase at 6:20, it will wake you up then, as your next light phase should be around 7:50, which is past your wake time.
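The decision such a clock makes can be sketched in a few lines. This assumes the 90-minute cycle described above, and `in_light_sleep` stands in for whatever the device actually senses; it is not any vendor’s real algorithm:

```python
from datetime import datetime, timedelta

CYCLE = timedelta(minutes=90)  # approximate length of one full sleep cycle

def should_ring(now: datetime, target: datetime, in_light_sleep: bool) -> bool:
    """Ring now if the sleeper is in a light phase and waiting one more
    full cycle would overshoot the target wake-up time."""
    return in_light_sleep and now + CYCLE > target

# The example above: light phase at 6:20, target 7:30. The next light
# phase (~7:50) would be too late, so the clock rings early.
print(should_ring(datetime(2005, 11, 23, 6, 20),
                  datetime(2005, 11, 23, 7, 30), True))   # True
```

An earlier light phase, say 5:00 against the same 7:30 target, wouldn’t trigger it, since there is still time for another full cycle.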

One of these devices is a watch that monitors temperature, body tension, etc, and statistically determines your cycle based on that. I’ve read complaints that the beeping is too quiet, and while the theory is you’ll be in a light sleep when it goes off (and thus easy to wake), if for whatever reason you aren’t, then it isn’t loud enough to really wake you up.

Another is a wireless headset that you wear, that connects to an alarm clock. The headset actually monitors your brainwaves to watch for the proper part of the cycle, and then the alarm goes off. This sounds a lot more reliable, but is apparently pretty uncomfortable.

It seems to me that the two would make a good combination – a watch to monitor your physiological functions, that connects wirelessly to an alarm clock that actually wakes you up.

So back to the sleep thing. I’ve been conscious of my sleep cycles for the past few weeks (though lately I’ve not been as diligent). I try to set my alarm to some multiple of 1.5 hours from when I’m going to bed (plus a bit, depending on how long I think it will take me to fall asleep). I have to say, I generally do feel better when I sleep 4.5 or 6 or 7.5 hours (note too, that the 8 hours that is supposed to be the proper amount of sleep is actually interrupting a cycle). I’ve even noticed that I feel better after getting 4.5 hours of sleep vs 5.5 hours (though, this could be coincidental and caused by other outside factors – it’s not like I’m doing a controlled experiment here!).

Of course, when it all comes down to it, I really do enjoy my morning sleeping in and no amount of sleep research will change that :)

November 18, 2005

Offline Wiki

Filed under: General — groogs @ 3:56 am

This is an idea for a program I’ve been tossing around for a while, and at this point I haven’t been able to find anything like it. Basically the idea boils down to an ‘offline wiki’ – a Wiki you can use locally, but that will also synchronize on the internet, letting you use it anywhere.

I really discovered the power of Wikis about a year ago when I first started to use Trac (which is an excellent source code / project management tool that I can’t recommend enough). I found the Wiki component very useful for creating documentation on the APIs and protocols I was developing, as well as a notepad for designing overall concepts (which naturally evolves into documentation for the program) and collecting research notes. It’s a very fast way to keep a growing collection of documents, keeping them fairly organized in the process.

At the same time, I find I use notepad on my laptop a lot to save quick notes or describe ideas. Much of the time I don’t have internet access – like when I’m sitting in an airport, on a ferry, waiting for someone on a job site, etc – so all these notes just end up on my desktop (or eventually deleted or in folders when I get around to sorting them). Most of the stuff would be great to have in a Wiki, but the most important part is being able to edit them while I’m offline.

Really, the simple solution is to install a webserver and a wiki locally on my computer. I’m a fan of simple solutions, but at the same time, this does mean that without my laptop I don’t have any notes. What would be the best case is to have a Wiki I can edit on my computer, or go online and edit from any internet connection. It should synchronize any changes in both directions whenever my laptop is connected.

I think this is actually quite feasible. The wiki would need to edit flat files (not use a database). Rsync could then be used to synchronize changes back and forth, and just be set to run periodically. The entire thing (a webserver, php, wiki software, rsync) could be bundled into one installation file (and though the install code would be different, it could easily be cross-platform for both Windows and Linux. Hell, even OSX could get in on the action).

The most difficult problem would be handling conflict resolution, when there’s two conflicting changes that rsync can’t handle on its own.

I’m wondering if it would be better to actually use a database (it would be nice to avoid running a database server on the laptop though). A "last synchronization" time could be stored, and then the synchronize script would just have to look at entries created/modified since that date. This would mean the synchronization code would have to be written from scratch, but that also means it could be written as an actual web page in the application, that prompts the user on how to handle conflicts as it comes across them. It could communicate to the server using REST.
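A rough sketch of that timestamp-based synchronization, with pages modeled as a dict of name to (modified-time, text) pairs. None of this is real wiki code; it just shows the last-sync idea and where the conflict prompt would fit:

```python
from datetime import datetime

def sync(local, remote, last_sync):
    """Classify each page as push, pull, or conflict using a last-sync time.

    local/remote map page name -> (modified_time, text).
    """
    pushes, pulls, conflicts = [], [], []
    for page in sorted(set(local) | set(remote)):
        l, r = local.get(page), remote.get(page)
        l_new = l is not None and l[0] > last_sync
        r_new = r is not None and r[0] > last_sync
        if l_new and r_new and l[1] != r[1]:
            conflicts.append(page)   # both sides changed: ask the user
        elif l_new:
            pushes.append(page)      # only the laptop changed: upload
        elif r_new:
            pulls.append(page)       # only the server changed: download
    return pushes, pulls, conflicts

last = datetime(2005, 11, 18, 3, 0)
local = {"Ideas": (datetime(2005, 11, 18, 4, 0), "offline edit")}
remote = {"Ideas": (datetime(2005, 11, 18, 4, 5), "online edit")}
print(sync(local, remote, last))   # ([], [], ['Ideas'])
```

The conflict branch is exactly the part rsync can’t handle on its own, and exactly the part that wants a real page in the application prompting the user.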

There should be something in the page header (on the site running locally) that tries to connect to the server, and if it can, and there are any new entries (on either side) since the last synchronization time, then display a "Changes have been made. Click here to synchronize to the server" button. If it can’t connect, then it can just display a "You are working offline" button. Changes locally while online could even be posted to the server using REST, so no synchronization is necessary. It might even be possible just to do synchronization in the background silently, only notifying the user if there’s a conflict to handle..

Now my random thoughts are turning into a decently complicated application. That’s really why I’m posting this on my blog: I have enough to do already, someone else take this idea and write it ;) Do it GPL or similar, and I’ll probably even help out a bit.

November 17, 2005

SOAP: Gives it a REST

Filed under: General,Technology — groogs @ 11:51 pm

I’ve noticed in the last little while that there seems to be a trend happening with web services: people think SOAP is too complex. I keep coming across articles and comments talking about how web services are just over-engineered. I have to say, I totally agree.

Here are some excerpts from a c|net article:

A debate is raging over whether the number of specifications based on Extensible Markup Language (XML), defining everything from how to add security to where to send data, has mushroomed out of control.

Tim Bray, co-inventor of XML and director of Web technologies at Sun Microsystems, said recently that Web services standards have become “bloated, opaque and insanely complex.”

This isn’t something that’s new. An onlamp article from 2003 talks about how people use REST over SOAP:

While SOAP gets all the press, there are signs REST is the Web service that people actually use. Since Amazon has both SOAP and REST APIs, they’re a great way to measure usage trends. Sure enough, at OSCon, Jeff Barr, Amazon’s Web Services Evangelist, revealed that Amazon handles more REST than SOAP requests.

Personally, I’ve always gone with so-called REST interfaces if I have a choice. I’ve in fact been using REST for many years, without realizing it was called REST (the term, which stands for Representational State Transfer, was coined by Roy Fielding in his doctoral dissertation in 2000).

Put simply, SOAP just requires so much setup and overhead to do what should be a simple task. As a programmer, I like to actually make working code, and I hate writing tons of ‘helper’ code that essentially doesn’t actually do anything. That’s what I feel like I’m doing when working with SOAP and writing WSDL schemas and all the extra junk. There are APIs to make things easier, but in the end it’s still an over-engineered protocol.

REST keeps it simple, with really no formal definition. It’s more a method than anything else. Go to a URL, get a bunch of data back. Talk about simple.

Advocates of REST push the idea that you should just use normal HTTP GET requests for retrieving data (as opposed to POSTing a complex XML query like SOAP does). This makes sense, as one of the ideas behind the web to begin with is that any piece of information can be obtained with a URI. This makes REST ridiculously simple, and for this reason some people hate it, others love it. It definitely makes things easy to debug, as you can test responses in your browser.

Most REST services return a response in XML, but they don’t have to. If all you’re trying to get is one piece of information, it’s just as easy to return that information in the body, with no tags at all. The querying application doesn’t even have to parse anything. Obviously this has its downfalls (like making it hard to return error conditions or multiple values), which is probably the reason responses are usually XML. Of course it helps that there are XML parsers available for virtually every language, including JavaScript. In fact, REST is basically what drives most AJAX applications.

Of course, SOAP is XML and therefore human-readable as well, but let’s look at the differences with a small example.

SOAP example

This is an example SOAP request for getting the price of a product.


POST /SOAP HTTP/1.1
Host:
Content-Type: text/xml; charset="utf-8"
Content-Length: nnnn
SOAPAction: "Some-URI"

<SOAP-ENV:Envelope
    xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/"
    SOAP-ENV:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">
  <SOAP-ENV:Body>
    <m:GetMarketPrice xmlns:m="Some-URI">
      <symbol>PART304285</symbol>
    </m:GetMarketPrice>
  </SOAP-ENV:Body>
</SOAP-ENV:Envelope>

and the response:

HTTP/1.1 200 OK
Content-Type: text/xml; charset="utf-8"
Content-Length: nnnn

<SOAP-ENV:Envelope
    xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/"
    SOAP-ENV:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">
  <SOAP-ENV:Body>
    <m:GetMarketPriceResponse xmlns:m="Some-URI">
      <Price>50.25</Price>
    </m:GetMarketPriceResponse>
  </SOAP-ENV:Body>
</SOAP-ENV:Envelope>

REST example

Just a simple GET query:

GET /REST/getprice?symbol=PART304285 HTTP/1.1

and the response:

HTTP/1.1 200 OK
Content-Type: text/xml; charset="utf-8"
Content-Length: nnnn

<price>50.25</price>
Considering they both do the same thing.. which one looks simpler?
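For what it’s worth, consuming the REST version from code is about as short as the request itself. A hedged sketch: the hostname here is made up, since the example doesn’t name one, and `price_url`/`get_price` are invented helper names:

```python
from urllib.request import urlopen

def price_url(symbol: str) -> str:
    # Hypothetical host; the path mirrors the GET example above.
    return f"http://api.example.com/REST/getprice?symbol={symbol}"

def get_price(symbol: str) -> str:
    """One GET, one small body; that's the whole exchange."""
    with urlopen(price_url(symbol)) as resp:
        return resp.read().decode("utf-8").strip()
```

No envelope, no encoding style, no WSDL: the URL itself is the entire contract, which is both REST’s charm and (as noted below) why it leans so hard on good documentation.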

Of course, for a REST service to be useful, just like any other API or tool, it needs to be well documented. This means documenting the request parameters, all the things it can do, as well as all the output formats. SOAP has WSDL that I guess provides this information (to a point, and assuming you know enough to make sense of it all), but I don’t think that feature alone is worth all the other baggage SOAP carries.

To be honest, I also only have limited experience using SOAP (for the reasons that I’ve outlined in this entire post), so I’m quite willing to hear arguments to convince me why SOAP could be more beneficial than REST. At this point however, I can only conclude that SOAP is just over-engineered. Why bother coding all that extra junk when you can just REST? :)

November 16, 2005

gotta live, gotta live, gotta live.. in dishtown

Filed under: General — groogs @ 10:12 pm

I was really not going to post another small entry, but I couldn’t resist this, as it just made me laugh. A small town in Texas has renamed itself to “Dish”, in exchange for 10 years of free satellite TV for all residents.

The city council meeting to vote on the name was packed on Tuesday night and about 12 people — 10 percent of the town’s population — stood up to support the name change, which passed unanimously.

So if you really love your TV, and want to move to a town that really loves their TV, I guess this is the spot for you!

(Note that I refrained from making any comments on the resolve of American society, the fitness of couch potatoes, and the high-quality television programming that exists today)



Filed under: General,Technology — groogs @ 6:55 am

Finally got around to updating some of my Firefox plugins, and noticed a neat new enhancement to the Google toolbar. When you’re typing in the search bar, they added a ‘suggest’ feature (where it finishes words and complete search phrases for you), and even better (since suggest isn’t really new), it shows you the number of results. (googlebar image)

Even though there is a Google search bar built in to Firefox, I still really like the toolbar. Notably, I use the "search this site" button a lot. Even when sites do have a search function, Google often just provides better results. PageRank display is good, and the highlight function (which highlights your search terms on the page) is really nice on long pages.

If you haven’t tried it before, I do recommend installing the Google bar for a few days. It does take up some screen real estate, but I think it’s worth it.

November 15, 2005

It happened..

Filed under: General — groogs @ 6:49 am

It finally happened. I created a blog. What in the heck am I doing..? Well, I figured that I will post some stuff here randomly. Why not? See what this whole blogging thing is all about. I feel like I’m missing out. Now I can click those "blog this" links, and "trackbacks" become useful, I guess.

Well, we’ll see. I’ll more likely use this as a place to put various ideas, geeky computer solutions and hacks I come up with, and who knows what else. In my job, doing IT, software development on far too many projects, network and server administration, and everything else, I often do a bunch of research into a wide variety of topics to come up with specific solutions to problems or provide a new service, and I really don’t have any place to post stuff like that. Now maybe I’ll do that here.

Of course, I may also just ignore this, and never post here again.. haha.