Tuesday, 26 July 2016

People Farming



The original World Wide Web, invented by Tim Berners-Lee back in 1989, was seen by many as a utopian design. A web that used open protocols and was decentralised meant anyone with a bit of knowledge could host a website. This decentralised web mainly ran on individual machines in bedrooms, offices or universities.

Fast forward through Google, Hotmail, Gmail and Facebook…

Today we have a very different web. Despite the web being easier to use (you no longer require so much specialist knowledge), it has morphed from many decentralised individuals towards a web that clusters around a handful of centralised global corporations. Said companies spend fortunes on siphoning up as much of your data as possible and, as a result, hold a monopoly on it. And let’s face it, this model tends to work; these companies are usually the ones that are actually selling the services we want to buy - Amazon, Uber, Airbnb.

But what does “People Farming” actually mean to us?
  • Companies like Google now want to mine our health records[1] to build up even more detailed profiles on us - how long before insurance is only available to the healthy elite?
  • PRISM, the US government surveillance programme revealed by Snowden, allows government bodies to walk into any service provider and demand they hand over all the data they hold on you.
  • And how safe is our data? Web servers, often housed in very poorly protected server farms, are honeypots for hackers… another day, another password list for sale.

These companies are able to offer you a “free” service, because the product they are selling is YOU. Your data is their profit. This seems to favour them more than it does us.

So what is being done to change this?
Solid[2] is a new web paradigm. In a project being worked on by Tim Berners-Lee, the idea is to separate our data from the servers and applications that process it. Effectively this should allow users to take back control of their data. No longer will we be tied into LinkedIn, Twitter or Facebook simply because we cannot leave them while they hold all our data. Maidsafe[3] want to take this one step further - by removing these servers altogether. In Maidsafe’s network, files are (encrypted and) split across multiple devices, meaning no one device holds all your data - taking things back towards a distributed model rather than a centralised one.

But today’s web is built on convenience. The fear is that decentralisation will take us back to a less convenient web. Maybe not; today we have better software, open standards, better interfaces and better applications, all of which should be able to bring the convenience we are used to whilst hiding the underlying decentralised platform from us. Maybe one day we can get back to that utopia - a web where individuals own and control their own data.

This post was inspired by a New Scientist article: “The web we want”, July 2016.



Tuesday, 21 June 2016

JavaScript Malware



The recently discovered RAA malware[1] is ransomware coded entirely in JavaScript. Similar to other ransomware, RAA encrypts a user’s files (including .doc, .xls, .pdf, .jpg, .png, .zip, .rar and .csv[2]) using AES-256, then demands payment of $250 for a decryption key. To date the ransomware has mainly, but not exclusively, targeted Russian-language devices. The malware is delivered via an email attachment that executes upon opening.

So what makes RAA different, and why should we be concerned?
RAA is written entirely in JavaScript. It is probably not the first malware to be written in a scripting language, and it certainly won’t be the last JS malware, but it is not every day we see malware written in JS. JS is a scripting language with many uses, but it is primarily used to add dynamic content to static HTML. Working with HTML and CSS, JS is what makes the web interactive. Unlike a compiled .exe file that needs to be executed by the user to run, JS is an interpreted language, so it executes automatically and without warning – certainly in a web browser. At the moment RAA is delivered via an email attachment, but what most of the press is not saying is that, theoretically, there is nothing to stop this being ported for delivery via JS in a web browser. As soon as you land on a malicious page, the JS will execute as part of the page content.
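Part of why a .js attachment is so effective on Windows is filename trickery (a hedged illustration - the attachment name below is made up, not taken from a real RAA sample): double-clicking a .js file hands it straight to the Windows Script Host, and with “hide extensions for known file types” enabled, a double extension disguises the script as a document.

```javascript
// Hypothetical attachment name, purely illustrative.
const attachmentName = "invoice.doc.js";

// The extension Windows actually uses to pick a handler - .js means
// the Windows Script Host runs it, much like an .exe:
const realExtension = attachmentName.split(".").pop();

// What the user sees when known extensions are hidden:
const displayedName = attachmentName.slice(0, attachmentName.lastIndexOf("."));

console.log(realExtension);  // "js"
console.log(displayedName);  // "invoice.doc" - looks like a Word document
```

The same script body would need no disguise at all on a web page; there, simply loading the page is enough to run it.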

So what about mitigation? Whilst not currently a showstopper, RAA certainly has the potential to become more vicious. Whilst this malware stays as an email attachment, normal rules apply – keep your AV up to date, beware of the sites you visit, and maintain regular backups. Some email software automatically blocks JS attachments, so keep your email packages up to date too. Once this malware ports to a web-page delivery mechanism, the current mitigation is to disable JS in your browser – keeping you safe, but without the web content and interactivity you expect. But what will we do when RAA is delivered via JS in a stored XSS?


[1] http://www.bbc.co.uk/news/technology-36575687
[2] https://www.enigmasoftware.com/raaransomware-removal/

Saturday, 7 May 2016

I spy with my little Google…



Big Corp tracks our every digital footstep – tell me something new! As someone who values what little privacy we have left, I take many steps towards actively minimising my on-line footprint. But just how much data do companies hold on us? I guess the real answer is we will never know, but I recently stumbled upon the history Google holds about me: https://history.google.com/history.

First off, when visiting this site, Google reassures me that only I, as account owner, can see this history data. OK, that’s a good start, but I question this statement. It is only true if you don’t include Google themselves, any associated marketing/advertising partners, and government departments with a warrant for Google to hand over this data.

So what do Google (admit to) track? My Web and App Activity – going back to 2009 when I opened my account – showed that, apparently, my second most searched item of all time is www.indeed.co.uk. This turns out to be a job site which it is possible I have visited in the past, but it made no lasting impression on me because I don’t remember it. My third most visited site was monsterhunter.wikia.com. I tried to visit this site, as again I have no lasting memory of it, but it has long since closed down. OK, if that is truly the accuracy of the information they hold on me, so far I have little to be concerned about. Next comes Voice and Audio Activity – empty, as I have never knowingly used this feature. Then Device Information – empty; Location History – again empty. Finally come YouTube Watch History and YouTube Search History. In all, Google had very little information on me because, as said above, I take active measures to reduce my footprint. However, if I allowed them, they could hold a very detailed profile – where I have been, what I have done, and indeed how I did it.

So what can I do about this? I believe Google can only track this information when I am logged into my Google account. For a while now I have been using a Firefox add-on called Google sign out – which must work, as Google have held little information about me since I installed it. Google is not the only search engine, and for several years I’ve been using DuckDuckGo. DuckDuckGo suits my purposes, but its engine is a long way behind Google’s. It does, however, allow an easy fallback to searching via Google when its own engine does not find appropriate results, so staying logged out of my Google account should minimise my footprint.

There was no easy way to delete my search history. Instead I had to highlight each individual search undertaken during the past X years in order to delete it. This could be painful for a heavy Google user - but it did provide a look back over the past few years of my searches. Interestingly, part way through deleting the search history a large number of deleted items re-appeared, indicating that all deletion achieves is removing my ability to see what my history is, rather than any actual deletion from a server. An interesting learning point from deleting my history was just how unmemorable most sites actually are. I’d almost go as far as to say that if you need to search for it, you probably don’t need it.

I was quite impressed by what Google allows you to opt out of (although the interface looks designed to confuse rather than aid). By selecting Activity I was able to pause almost all data collection (apparently). They also allow editing of some advertising information, such as stopping ads tailored to your interests. There was also a setting to opt out of Google’s DoubleClick cookies, but this requires downloading an app; sorry Google – I don’t trust you enough to download it.

For me, this exercise was about reducing the information held about me on a server somewhere. Maybe I am missing something, but I fail to see the benefit to me of having my searches saved. Do I believe the last hour spent deleting my digital history means my data really has been physically deleted? No, not really. Clearly to Google this information is about advertising; to the government it’s about building profiles on people. But for me, this is about trust. I appreciate that sites rely on advertising for revenue – and let’s not bite the hand that feeds, this blog is powered by Google and we already know you don’t get anything for free. Companies spend a lot of money on trying to build our trust, with varying degrees of success. But when companies such as Google are allowed access to NHS records[1], if the ethics of Big Corp retaining masses of data on individuals is not questioned, then we might as well hand over our keys and go home.


[1] http://uk.businessinsider.com/google-exec-defends-nhs-data-sharing-agreement-2016-5?op=1?r=US&IR=T

Saturday, 20 February 2016

Just what does the FBI want Apple to do?



Within the last few days, US courts have issued a demand that Apple provide investigators access to data on the phone belonging to San Bernardino gunman Syed Rizwan Farook, which the FBI says contains critical data. So far Apple has complied with subpoenas for data requested by the FBI in the San Bernardino case. It appears that, within 24 hours of the phone being in custody, its Apple ID password was changed, preventing forensic teams from accessing backup information.[1]


Apple are not in a position to actually break the encryption on the iPhone. Because of this, the US government, using an obscure law from 1789 called the “All Writs Act”, has ordered Apple to produce a new iOS which introduces the ability to input the phone's four-digit passcode “electronically” and prevents the iPhone from erasing data after 10 failed logins. Essentially the request is to open up the iPhone to a brute-force attack. The FBI insist that this is a one-off back-door for this particular iPhone, and that Apple can work on the phone in their own HQ to reduce the risk of the new iOS being released to the world.


A one-off back-door for a single iPhone might not be a big ask, but it sets a precedent for this to become a regular occurrence. And once the US government has a back-door to iPhone users’ private information, how long before sophisticated hackers make use of it?
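Some back-of-the-envelope arithmetic shows why those two protections matter (a sketch only - the electronic entry rate below is an assumption for illustration, not a figure from the case):

```javascript
// A 4-digit passcode has just 10,000 combinations.
const keyspace = 10 ** 4;

// With the wipe-after-10-failures protection in place, an attacker gets
// at most 10 guesses before the data is erased:
const chanceBeforeWipe = 10 / keyspace; // 0.001, i.e. a 0.1% chance

// Remove the wipe and allow "electronic" passcode entry at, say,
// 12.5 tries per second (an assumed rate, purely illustrative):
const triesPerSecond = 12.5;
const worstCaseMinutes = keyspace / triesPerSecond / 60;

console.log(`${(chanceBeforeWipe * 100).toFixed(1)}%`); // "0.1%"
console.log(`${worstCaseMinutes.toFixed(1)} minutes`);  // "13.3 minutes"
```

Under those assumptions, every possible PIN can be tried in under a quarter of an hour - which is why disabling the wipe and automating entry amounts to removing the lock altogether.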

 
Earlier this week, writing in an open letter, Apple CEO Tim Cook wrote: "We oppose this order, which has implications far beyond the legal case at hand…Customers expect Apple and other technology companies to do everything in our power to protect their personal information, and at Apple we are deeply committed to safeguarding their data. Compromising the security of our personal information can ultimately put our personal safety at risk. That is why encryption has become so important to all of us."[2]


A few days ago John McAfee – as in the anti-virus software company McAfee – apparently came to Apple’s aid, making his own offer to the FBI: his team will decrypt the information on said iPhone within three weeks using social engineering techniques, free of charge[3]. Just how he intends to do this is unclear, but then a US presidential candidate never rejects free publicity in an election year.