• Category Archives: Internet
  • Heartbleed, the media, and passwords. I might be annoyed.

    This is a rant. It’s a long one. I’ve not proof-read it much, there’ll be mistakes.

    Opening

    So, unless you’ve been hiding under a rock of late, you’ve heard about Heartbleed. Heartbleed is a bug in OpenSSL, one of the core programs used in the open-source world to keep secret the things that need keeping secret, like credit card details. This particular bug is important because it can leak information that shouldn’t be leaked, like credit card details. Just click the link above; it gives a really good basic idea of how it works. It mainly affects things protected by SSL.

    So, now that everyone knows what it is, why is it important? The information leaked can be anything held in the memory of the computer (henceforth called the “server”) responsible for keeping the affected website on the internet. That can include requests for websites, file transfers, emails, SSL certificates, SSL keys, credit card numbers and passwords.

    Passwords, memory and maths

    Now, that last one is the one the media, and certain people, have been shouting about. This bug has a small potential to leak passwords. However, this is totally not as serious as it sounds. Passwords are only kept in plain text for a short time – normally, as long as it takes to hash them (one-way-encrypt them) and check the result against a database. So your passwords aren’t sitting out in the open for anyone to steal. Additionally, you have to have entered your password within a second (or two at the latest) of someone using this bug to pull information from a server.

    As problematic as this bug is, it’s limited: it lets an attacker read 64 kilobytes of information from the server’s memory at a time. That sounds like a lot, until you remember that modern servers have up to 16,777,216 kilobytes of memory – 262,144 blocks of 64KB. Even servers a few years old (and in server terms, that can be really quite old) have 4,194,304 kilobytes, or 65,536 blocks of 64KB. So someone has to have managed to use this bug to grab exactly the right block at exactly the right time to get your password. Also, trust me, we would notice if someone started constantly reading that much information out of our servers; it would be obvious something was wrong. Additionally, not every server is vulnerable to this weakness. Those running IIS, or an older (but still patched) version of the operating systems used to host websites, remain safe. Something like two-thirds of sites are affected – and, crucially, only the servers among those that are set up for SSL.
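    To make the hashing point a little more concrete, here’s a rough sketch of what a login check looks like in principle – this isn’t any real site’s code, and the MD5-crypt hash is used purely for illustration: the database only ever holds a one-way hash, and the plain-text password exists in memory just long enough to hash what was typed and compare the two.

    # what the database stores: a one-way hash of the password, not the password itself
    stored_hash=$(openssl passwd -1 -salt abcdefgh 'correct-horse')

    # at login time: hash the submitted password the same way and compare the results
    submitted='correct-horse'
    if [ "$(openssl passwd -1 -salt abcdefgh "$submitted")" = "$stored_hash" ]; then
        echo "login ok"
    fi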

    So, why all the “RESET ALL YOUR PASSWORDS!” screaming? There is a small chance of grabbing an SSL key. Now, due to the way this bug works, this is more likely than other things to have happened. Why is the key important? It’s the set of random numbers that says you ‘own’ a certificate. So, in theory, it can be exposed. Why is this a problem? With the key, you can pretend to be the person for whom it was created — if you got google.com’s key, you could pretend to be google.com. Now, this *still* isn’t that easy to exploit: you basically have to perform a man-in-the-middle attack, which is hard and complex, and will only get you really limited information, depending on where you can do it.

    No, this is not as serious as it sounds

    So, why have I been tweeting lots saying you shouldn’t rush out to reset all your passwords? Three reasons. The first: the likelihood of anyone actually getting your password is really, really, really small. Remember, there are (at best) 65,536 places your password could be, and only 2 seconds to find it before it vanishes. Per affected website. Add to that the fact that these bugs are hard to find, and using them to get information is hard. Using them to get useful information is also hard – all the bug comes back with is a load of data you have to run through conversion routines to get anything out of. Additionally, due to the way this data is stored, there’s no guarantee it’ll be easy to match your password to your username, which is crucial if you don’t want to have to guess usernames.

    My second reason is worry about the effect this will have on people who aren’t used to strong password security. You’re going to be telling them to dump every single one of their current passwords and start again. Things are already really bad – the top two passwords of last year were “123456” and “password”. So, though I have no studies on this, I would bet, with hard cash, that using fear to force people who don’t use good passwords to reset them all will weaken passwords as a whole. I suspect we’ll find a lot more weak passwords, and a lot more passwords shared between websites, in the next few batches of password leaks.

    Finally, my third reason: evidence. We’ve had no evidence of large-scale, source-less password leaks recently. Hackers, especially some of the nicer ones, have a habit of dumping their finds publicly, and a large-scale capture of passwords would show up in activity around the internet. Additionally, passwords aren’t the only thing Heartbleed can expose. It can expose credit card numbers. And the credit card companies do not like sites to which a hack can be traced back. In fact, they have a habit of forcing said companies to go through a rigorous, lengthy and painful auditing process to find out exactly *how* the data leaked. The security community would have heard about it if these audits had been turning up nothing, or if credit card data had been vanishing in any significant quantity – the audits might even have thrown up the bug itself.

    Media

    So, this password thing. It’s being pushed by the media, and by the guys who created the ‘Heartbleed’ website, as a much bigger issue than it really is. Now that the bug is out in the open, script-kiddies will start using it, as will advanced state agencies. I’ve heard rumours of people seeing internet-wide scans originating from state agencies shortly after the bug was announced. So it’s important that it’s patched quickly, and it’s a big problem for the tech community, but with the low chance of password exposure it’s not that important for the average user. So, why are the media saying “CHANGE ALL YOUR PASSWORDS”? Two reasons, mainly. The first is that it’s a far better headline than “There was a bug. We’ve fixed it.” The second is that that’s the response we, the hosting and security community, have ingrained as ‘the’ response to any sort of compromise. Yahoo got hacked? Change your passwords. Last.fm got hacked? Change your passwords. So, when they hear about this hack, which they do not understand, they fall back on the thing they know; and since this bug affects ~60-70% of SSL-protected servers, they think “ALL” instead of just a limited set.

    Responsible Disclosure – how not to do it

    In my opinion, the Heartbleed release is a perfect example of how NOT to do responsible disclosure, no matter what certain lucky parties claim. First, create a website with inflammatory content. Then, let those with insider access patch early. But, crucially, don’t inform the operating system vendors before you make it public. Don’t let anyone in the security teams of Ubuntu, Debian, RedHat or SUSE know. You know, just the people who actually have to *create* and *deploy* the patch to the millions of affected servers. Don’t let big publishers or sites know (Yahoo, the BBC, Facebook). Instead, publish your site and wait for the shitstorm to hit, as the media companies pick this up, shout about it, and make customers scared. Now, to their credit, the Debian OpenSSL team got a patch out for this bug 30 minutes after they had a bug report. But they didn’t have a bug report when Heartbleed went public. No, the bug was reported hours later, after the viral-news effect had got around to someone who knew where and how to report a bug in Debian’s bug tracking system.

    Other, big bugs

    You know, there’s a package that runs a good 22% of the internet. In the past week, they published a really critical bug, one that allows remote authenticated access to their sites. This package? WordPress. The bug allows an attacker to gain administrative-level access to any WordPress site. In actual damage terms, this bug will cause me – and likely our customers – far, far, far more grief than Heartbleed ever will. Heartbleed was patched out of our network in the space of a few hours, with some minor services taking maybe a day or so. If we’re not running a vulnerable version of WordPress somewhere on our network this time next year, I’ll eat my hat. And if some clever black-hat hasn’t written an automatic compromise bot to exploit this within the next few months, I’d be very surprised.

    Another package that had a critical security patch in the past week? An add-on to WordPress that a good proportion of WordPress sites also use: Jetpack. They found that they had another remote-access, posting, and privilege-escalation bug in their code. Again, this single bug will cause us far more trouble in the long term, simply because people won’t upgrade.

    Other, easier ways of losing your password

    Every now and then, someone’s website gets hacked and crap gets uploaded. We trace it back to their own computer, using their own login details. What happened? Though we’ve never been able to say with 100% certainty, they were probably infected with a keylogging virus that saw them typing in their (S)FTP login details, and which automatically used said details to deface their site. That has become less common recently, but it was almost a weekly occurrence only last year. How did the keylogger get installed? Simple: our customers either didn’t have anti-virus, weren’t maintaining it, or actively ignored its alerts. They click on links in emails they’re not expecting, open files in emails they’re not expecting, and get infected. Just this week, something has been quite determined to infect me – sending me ‘delivery notes’ and asking me to ‘print a zip file’. The ‘zip’ file was a Microsoft Excel .xls file, and likely not really an .xls file at all, but something quite nasty.

    Internet cafes. Ever used one to pick up your email? There’s a good chance that someone knows your email account password — those computers often have keyloggers installed, or have someone on the same network watching or intercepting the traffic. Use that same password on PayPal? Oh well, say goodbye to your money. Ever used a public wifi connection? You know, one of those unencrypted ones, on your iPhone? Your iPhone logs into your email accounts without encryption? Say goodbye to your username and password.

    In closing

    Is Heartbleed serious? For webhosts, yes. For users, in the brief period after heartbleed.com went live until our servers were patched? Yes. Now? Not really. It could have been a lot better, and it could have been a lot worse. Hopefully, this will give the OpenSSL guys more resources to stop any future bug like this slipping through the net. Do you need to reset your passwords? Only if you connected to a vulnerable https:// site in the brief period between the bug being announced and that site being patched. Better would be just to watch your bank statements – something you should be doing anyway. Use 2-factor authentication if you can. Use a password manager; my favourite is KeePass, with its database stored on Dropbox and a key file stored elsewhere. Use separate passwords for every site, and don’t try to remember them – just auto-generate them using KeePass’s password generator.



  • [UPDATED] Useful Firefox addons

    2009 vs 2013 Useful Firefox [Browser] Addons

    Originally I wasn’t really into add-ons; then I got into trying loads of them, and eventually I have whittled the list back to a few firm favourites (favorites, for the Americans).
    [I was going to delete the favourites bit, but then I noticed that my Firefox dictionary is still set to US English because of it (which is what happened back in 2009 too, haha).]

    Ubiquity – beta addon for Firefox: run commands, send email, create a new calendar event, update Twitter.
    I have not tried this yet. Read about it here:
    http://www.ghacks.net/2008/08/26/mozilla-labs-ubiquity-is-a-firefox-killer-application/
    or at Mozilla Labs: http://labs.mozilla.com/2008/08/introducing-ubiquity/
    [2014: Ubiquity has died, but you can still install the addon (download using the bitbucket link): https://addons.mozilla.org/en-US/firefox/addon/mozilla-labs-ubiquity/]

    Favourites/remember this website

    Tag sifter

    Taboo – one click remember this, timeline

    Readitlaterlist.com, now Getpocket.com – my current favourite. [2014: Still using it today]

    Foxmarks

    Tabs

    Duplicate tab

    Tab Kit [Plus] – organizer of tabs

    [2014: I haven't used this in a while]

    [New for 2013: Firefox: TooManyTabs – https://addons.mozilla.org/en-US/firefox/addon/toomanytabs-saves-your-memory]

    Testing this one right now!

    [2014: Tab Manager]

    Awesome: enables tabs of tabs (another tab bar above, so you can group tabs into projects, subjects etc.).
    However, it’s not available for the latest Firefox, and the other versions are buggy/not working. :(

    [New for 2014: Chrome: OneTab – http://www.one-tab.com/]

    There really is one tab to rule them all! Fold all open tabs down to one and free all that memory. Edit what’s ‘open’, leave it as just a single tab, or reopen them (one by one or all at once).

    I NEED to test this one when I switch back to Chrome!

    Session Manager – protector and saver of tabs! [2014: integrated session managers are pretty comprehensive now!]


    Download Them All

    [2014: Still very useful the last time I used it a few years ago, but internet speeds have increased monumentally since 2009 – so much so that download managers aren't needed for that any more. It's still a brilliant tool for downloading all the images from a page, for example.]



  • bind refuses to restart, debian squeeze

    After an upgrade, I’ve noticed a few times that bind has refused to restart or reload, saying:

    Stopping domain name service: namedrndc: connect failed: connection refused

    This seems to be a permissions bug in Debian, quite a long-lasting one. To cheat-fix it quickly, I do the following:

    chown bind:root /etc/bind/rndc.key
    chmod 660 /etc/bind/rndc.key
    /etc/init.d/bind9 restart

    That seems to fix it well enough. I think the problem is that bind starts as one user but runs as another. It may be that 440 is all the permissions that are necessary. The Debian bug report is here: http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=169577
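
    If you want to check the fix has taken, a couple of quick sanity checks (just a sketch – rndc ships alongside bind9 on Debian) should now run without the “connection refused” error:

    ls -l /etc/bind/rndc.key    # should now show bind:root with group read access
    rndc status                 # talks to named over the control channel using that key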



  • Magento Session Files

    Magento (the popular open-source online shop system) likes to store its PHP session files in ~/public_html/var/session/

    Most Debian servers don’t include that directory in the cron job that deletes old session files.

    So, you probably want to set it to store its session files in the default location (/var/lib/php5), or alter your cron job (/etc/cron.d/php5).
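
    If you go the cron route, a minimal sketch of an extra entry might look like the following – the path, the file name and the 24-hour cut-off are all assumptions, so adjust them to your own layout and session lifetime:

    # /etc/cron.d/magento-sessions (hypothetical) – sweep up stale Magento session files
    09,39 * * * *   root   find /home/*/public_html/var/session/ -type f -name 'sess_*' -cmin +1440 -delete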

    Fun!



  • Eaccelerator mirror / downloads

    Eaccelerator is insanely useful in my line of work. However, their main downloads are down right now, so I’m mirroring the latest version here:

    http://kirrus.co.uk/stuff/eaccelerator-0.9.6.1.tar.bz2
    http://kirrus.co.uk/stuff/eaccelerator-0.9.6.1.zip

    You can see the files’ sha1sums here: https://eaccelerator.net/wiki/Release-0.9.6.1

    Alternatively, if you’re scripting (we are), you can use the following to get my (‘up-to-date’) version:
    http://kirrus.co.uk/stuff/eaccelerator-latest.tar.bz2

    bz2.. because that’s the version we use here ;)
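
    For what it’s worth, a scripted fetch might look roughly like this – the sha1sum is printed so you can compare it by hand against the Release-0.9.6.1 page linked above before unpacking:

    wget -q http://kirrus.co.uk/stuff/eaccelerator-latest.tar.bz2
    sha1sum eaccelerator-latest.tar.bz2    # check against the sums on eaccelerator.net
    tar xjf eaccelerator-latest.tar.bz2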




  • Web 3.0

    Web 3.0 is coming soon…

    Linking
    IMHO the Web 3.0 revolution will consist of websites and web apps from the 2.0 era becoming more closely connected.
    I think that it will become easier to link together content across web sites to create new forms of content.

    The Web 2.0 revolution was helped by blogs, with authors linking together information in posts. (This, I might add, has been very useful for combating the slew of dodgy sites that sit high in Google’s results but just spit back the search terms as results, nullifying your search. Nowadays I find myself using ‘blog’ in search terms, especially when looking for reviews.)

    I can’t wait until someone puts together a really good way of visualizing all this data. As the internet grows, the importance of being able to sift through the available data and collate it into collections on particular topics is becoming paramount.

    I have been looking out for a system to visualize my internet links in some kind of subject-oriented way, with a timeline / time axis. So far the only thing that comes close is Basket Notes for KDE (screenshots). If only that were a web app! (If I had the motivation and focus, I’d turn my meagre PHP programming skills to that task myself, but alas, like my sketched design for a social networking site, written in my design book before the advent of Facebook, I think I’ll leave it to someone else!)

    I guess the closest web-based system (that I’m aware of) currently in operation is Wikipedia!

    Retrieval
    Look at the useful plugin Ubiquity, and the fantastically useful cross-platform application and search launcher Launchy, for example. Both of these are designed to give us quicker access to, and better search over, our data.

    Workflow
    Making computers integrate seamlessly into our lives rather than interrupting them.
    Today the focus of computing is shifting from _ to the workflow – how we get things done. I think this is essential, because your average end user doesn’t care how things get done, just as long as they get done.

    Digital photographers often use a prescribed workflow when working on digital photos – ‘developing’ them, as it were, to bring out the best. PCPro Magazine suggests 1. levels and curves, then 2. colour adjustment, followed by 3. sharpening. But I’m talking about more than just the best sequence of events to achieve the best-quality output. I’m talking about the process itself.

    Our brains think sequentially: each action is broken down step by step, and the steps are performed one after another. A break in our concentration, or ‘flow’, impacts our effectiveness. This is especially true for people with ADHD (like me), which is why reducing the need for context switching matters.

    “Consider that it takes 15 minutes for a developer to enter a state of flow.  If you were to interrupt a developer to ask a question and it takes five minutes for them to answer, it will take a further 15 minutes for them to regain that state of flow, resulting in a 20 minute loss of productivity. Clearly, if a developer is prevented from flowing several times during the day their work rate declines substantially. “

    (Retrieved from http://softwarenation.blogspot.com/2009/01/importance-of.html)

    For example, downloading pictures from your digital camera and uploading them to Facebook. Recently I’ve been using ‘Windows Live Photo Gallery’. Ugh, I know, but the point is that Vista offered it to me, and there was an easy-to-find plugin that allows me to upload directly to Facebook, where most of my photos end up these days.

    To download the pictures, I simply flip the SD card out of my camera and insert it into my laptop’s SD card slot (useful laptop buying advice).

    And that’s the point, people will take the path of least resistance/effort.

    Path of Least Effort Principle
    Like people walking down the high street trying to avoid colliding with other pedestrians, my observations lead me to believe that everybody operates on a principle of least effort: the person you are approaching will take the path that requires the least diversion from their original course in order to avoid a collision, while you attempt to do the same thing.

    How does this come back to Web 3.0?

    How many clicks does it take, while searching for some long-forgotten but relevant piece of information, before a user gets bored and moves on? [research: advertising, Google hotspots, number of clicks] Could it be as low as 3, and as high as 8?

    Unified User Interface
    Take Facebook, for example. I was trying to find my note on laptops to include a link in this article, but alas, clicking Notes from the home page only brought up a ‘feed’ of notes. Where, I ask, are the filter options that appear on everyone’s profiles? Why can’t I select ‘Just Garreth’ here too?

    If something like that is useful, it should also be unified – that is, available everywhere!

    In the time it took me to discover the ‘workflow’ to access my notes, in this ‘fast/bitesize/information-obsessed’ age, my poor overloaded ADHD (video: ADHD impact on life) brain might easily have become bored, frustrated and, more importantly, distracted, and moved on…

    Availability
    Cloud computing and Rich Web Applications (Blog: Google and Rich Web Application)

    Organisation of Data
    TOC

    Concise
    It’s an inverse law – as our attention spans decrease, the conciseness of the data we consume must increase, ceteris paribus.

    Why do my spidey senses tell me Facebook, not Google, may be the winner in the Web 3.0 revolution?

    1. Reduce the need for context switching
    2. Make data transfer between devices, programs and operating systems simpler and more unified
    3. Make data easier to locate and retrieve
    4. Make locating an open program/context switching easier and more natural – in doing so reducing the impact on flow by automatically knowing how to get back to the other program/where it is.
    5. Design and create more natural interfaces – e.g. Apple’s iPhone and iTouch.
    6. Consider how context switching works in our heads and apply this to UI.
    7. Work on unified user interfaces so as not to interrupt flow

    What do you think? Leave some comments of your vision, and what you think of my ideas.



  • Windows Command Line Ping Replacement

    So, the Windows version of ping is really stupid.

    I was writing a batch script to mount a network share, which involved checking that my NAS unit was turned on. The script is scheduled to run after the computer resumes.

    What I found out is that the built-in version of ping.exe is terrible at telling you whether the ping returned successfully or not. I was checking the ERRORLEVEL (%ERRORLEVEL%) variable to find out what ping was returning. It should be 0 for success and 1 or higher for a failure.

    What I found was that I was getting replies from the local PC (dunno why – leave me a comment if you know), and ping was reporting success even though the correct PC had failed to reply. The solution?
    Replace the Windows ping.exe with Fping. It has a lot more options and appears – from some initial quick tests – to report the errorlevel correctly.

    Kudos to Wouter Dhondt for developing it. I’ll update this post with any more news!


    [Image: Fping vs Ping errorlevel return values]



  • PC Gamer Rips off Rock Paper Shotgun

    Back in June of this year, PC Gamer launched a new website. This website design appears to be a rip-off of that used by Rock Paper Shotgun. With all the images that follow, click through for a larger version.

    But let’s roll back, shall we? Rock Paper Shotgun launched in September 2007, though their first post goes back to July 2007. They were a novel PC gaming blog, trying to do something different in the gaming scene. They concentrated on PC games and only PC games, with running jokes. They have a small enough set of writers that you can pick up the personality of each. (Kieron takes the weird ones – VERY NSFW: example.)

    Back in 2007, pcgamer.co.uk redirected to a sub-site of www.computerandvideogames.com. Since then, they haven’t altered the design at all. Now, it redirects to pcgamer.com. Looking at the two reveals this:

    [Image: website comparison]

    As an ex-web-developer, it looks to me like someone decided that they quite liked the RPS-style website and went ‘make me a website like that, but in this style’, then tweaked the mock-ups (and site designs) a few times, till what they had looked remarkably like what we see now.

    Saying that, of course, this is quite a standard design style. It comes quite easily when you use WordPress as your back-end engine, as this blog does, and as RPS does. However, they’ve not just used the WordPress site layout as a base; they’ve decided to publish all of their posts in the same sort of format as RPS, with the same aim of getting discussions going around their posts via the comments.

    A little birdie 1 tells me that someone at Future (the company behind PC Gamer) really might hate Rock Paper Shotgun, and would rather they disappeared. It’s almost as if they’ve finally decided to fight for this sphere of influence with money and lots of people – finally decided that maybe their website is worth working on and taking care of.

    What annoys me, is that the big guy is trying to kill the little guy :(

    Here are a whole load of screenshots, to save you finding them. Some are from the Wayback Machine, some are from the website directly.

    The old website, up till June. This image was recovered with a lot of hard work by webpigeon of unitycoders.co.uk (thanks!), since PC Gamer used some really horrible website coding, which broke the Wayback Machine copy. This has to be one of the ugliest websites I’ve seen, though not the worst. You could switch the big image, and below it was a list of recent stories.

    How pcgamer.co.uk looked until June.

    And, if you scroll down a bit..:

    The old PC Gamer site in the Wayback Machine, scrolled down.

    They seem to be trying to throw links at you, lots and lots and lots of them, in a really small space. Check it out for yourself.

    Rock Paper Shotgun’s footer:

    PC Gamer’s footer:

    Ooo… don’t they look similar? Apart from the ‘we must keep up with the cool kids’ Twitter panels and lots and lots of post links (which RPS doesn’t force on you, or puts in the right-hand panel). This mess could also be due to search engine optimisation, that dark art in which you try to trick search engines into putting you higher up in their listings than your arch-rivals.

    Now, I work for the company that keeps RPS online. I like the guys who work there; I think they do a good job, especially considering they’re not getting paid much for it.

    Also interesting is the fact that PC Gamer seem to have thrown money at this venture. I work with some high-load WordPress-powered sites, and there are some very obvious things you do to make them work fast. Very fast. PC Gamer isn’t doing at least one of the most obvious, which suggests that they’ve instead thrown cash at keeping it online, with a cluster of computers working on it. Don’t know how a website works? Find out here 2

    1. Source, not related to RPS
    2. All images are Fair Use under the DMCA.


  • The fallacy of bandwidth limits

    Currently, according to mainstream media, bandwidth is defined as the quantity of data you download or upload to the internet over a month. So, for example, your ISP will tell you the maximum bandwidth limit is 100GB. Or whatever.

    That, however, is not its true definition. Its true definition is:
    a data transmission rate; the maximum amount of information (bits/second) that can be transmitted along a channel 1

    This is the secret thing about bandwidth. ISPs don’t care about how much you upload to the web over a given period. We care about how fast you upload it.

    When you pay for a high-level connection to the internet – the kind you use to connect houses, or web-serving computers – you do not pay in quantity over time. You pay in speed. So, for example, 1 gigabit per second. If you go over that speed for longer than an allowed ‘burst’ period, you pay an overage charge – always assuming that your network is even capable of going over that speed.
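
    To put the two meanings side by side: a “100GB a month” cap sounds like a lot, but spread evenly over a 30-day month it works out as a tiny transmission rate compared with the pipes we actually pay for. A rough back-of-the-envelope sum:

    # 100 GB per month expressed as an average rate, in megabits per second
    # 100 GB ≈ 800,000 megabits; 30 days ≈ 2,592,000 seconds
    echo "scale=2; (100 * 8 * 1000) / 2592000" | bc    # ≈ 0.30 megabits/second average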

    Think of bandwidth like gas going through a pipe. (Terrible, terrible analogy, I know, but it’s the easiest way to explain.) That gas can only flow so fast, and only so much can fit in the pipe at any one time. We don’t particularly care if you use 100GB by taking a trickle out of the system at a time. We do care if you take a torrent.

    Realistically though, customers never notice bandwidth. They’re too busy playing with computer-resource-hungry things, like WordPress, to even be able to consume all of their allocated bandwidth. Only very, very rarely do we actually start thinking about bandwidth rather than computing resources. Normally, it’s podcasts. Static files: almost no server resources are required to send them out onto the internet, but they eat bandwidth. Most are ~50–80 megabytes per episode. Get enough people downloading one simultaneously, and we’re going to start noticing…

    As long as the current trend continues – i.e. the more computing power we have available to provide you with your shiny websites, the more the people creating those shiny websites waste it – the mainstream will never notice this secret.

    More often than not, the reason we ask people to upgrade off our shared servers is not that they’ve reached any arbitrary bandwidth limit (although we may use that as a guide to identify them); it’s that they’re using too much CPU time.

    1. http://wordnetweb.princeton.edu/perl/webwn?s=bandwidth


  • Easy on the eyes

    Just a quick post here…

    Recently my eyes have been a little strained from using the computer. I think it probably has something to do with my reading glasses being misplaced somewhere at university. Hopefully I’ll find them before my Mum finds out and goes nuts, lol.

    Anyway, to reduce browser-related eye strain, I found a handy script for Greasemonkey (in Firefox) that kinda inverts the webpage / makes it less white and a bit easier to read (higher contrast). It’s not perfect, but it’s a handy hack until I can do some more hunting for my glasses!

    Anyway, enough text; here are the links:

    Invert web page colours (lifehacker)

    Direct link to Greasemonkey script

    Options are customisable, so you can restrict the websites it works on…

    Oh, and here’s a screenshot:

    [Image: invert_webpages]