Stupid elementary IT mistakes of 2016

If 2016 showed us one thing, it's that people can and will make stupid elementary IT mistakes. Here are just four, and how they could have been avoided.


Blood Donor database leak
As a blood donor to the Australian Red Cross Blood Service (ARCBS) myself, this one is particularly saddening to report. Yet, it happened, and it was Australia's largest data breach to date.

The ARCBS became aware on October 26 that its outsourced web server partner had allowed data to be exposed through negligence and lack of thought.

Specifically, the web server had anonymous directory browsing enabled and, further, someone had saved a backup of the MySQL database to the very public website folder itself.

All it took was one person trawling the web for public-facing websites with directory browsing enabled to discover this. There was no tricky 'hack', no exploitation, no social engineering ... this was someone using the website purely as it was configured, to download files purely as made available.

How do you avoid this? Well, to start, you don't enable directory browsing on your website unless you really, actually, definitely, truly want that. Why would you?

Then, don't save backups to your public_html folder. Why would you, except perhaps as a convenience for downloading the backup yourself? Even then it is not a good approach, and at the very least the file ought to be deleted after downloading. In this case, the developers left it sitting in the public-facing folder.
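As an illustration of how easily this sort of exposure can be found - and how easily you can check your own site for it - here is a minimal Python sketch. The site URL and backup file names below are hypothetical examples, and you should only run something like this against a site you are authorised to test.

```python
# Minimal sketch: check a site you own for an exposed directory listing
# or an obvious database backup file. The URL and file names below are
# hypothetical - substitute paths you are authorised to test.
import urllib.request
import urllib.error

SITE = "https://www.example.com"          # hypothetical site you own
COMMON_BACKUPS = ["backup.sql", "db.sql", "dump.sql.gz", "site-backup.zip"]

def fetch(url):
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status, resp.read(2048).decode("utf-8", "ignore")
    except urllib.error.HTTPError as err:
        return err.code, ""
    except urllib.error.URLError:
        return None, ""

# 1. A directory listing typically announces itself with "Index of /".
status, body = fetch(SITE + "/")
if status == 200 and "Index of /" in body:
    print("WARNING: directory browsing appears to be enabled")

# 2. Obvious backup file names should return 404, never 200.
for name in COMMON_BACKUPS:
    status, _ = fetch(f"{SITE}/{name}")
    if status == 200:
        print(f"WARNING: {name} is publicly downloadable")
```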

Hats off to the ARCBS, which acted swiftly and arranged access to IDCARE for affected persons, but a solid smack to the developers who caused the exposure.


Recruitment database leak
What's worse than having a public-facing website with a database backup stored within it, and turning on directory browsing? Let me tell you ... it's doing that, and then continuing to do it even after someone else has suffered a widely publicised data breach from the very same thing.

This is precisely the case for the global recruitment firm, Michael Page.

One month after the ARCBS data breach, the very same thing happened to Michael Page. The circumstances were identical: a person trawling the web for public websites with directory browsing enabled discovered that Michael Page had such a site, and, exactly as with ARCBS, the developers had chosen to store database backups in that public folder.

As per the above, this is just negligence, laziness and lack of thought on the part of the developers (in Michael Page's case, the responsible party was consulting firm Capgemini, https://www.capgemini.com).

Yet it's worse in this situation, because news of the ARCBS data breach had already been published, including the scenario of how it came about. Doesn't Capgemini's team read the news?

Australian Census fiasco
What can we say? The Australian Census could have been a monumental success of online surveying, setting a future path to electronic voting while reducing the manual cost of distributing, collecting and tallying paper-based forms.

Instead, it was a national embarrassment that sparked a Senate enquiry, whose report is now online.

The Australian Government selected IBM - at a cost of millions of dollars - to provide the Census platform, despite IBM being black-listed by the Queensland Government for its botched health payroll rollout in that state.

IBM implemented its own data centre for this purpose, rather than working with existing scalable online cloud-based platforms that already provide proven and tested elastic load-balanced facilities.

IBM and the Australian Bureau of Statistics allegedly tested the servers for a demand calculated on the assumption that people would use the site evenly throughout the day, with no consideration of the typical 'Census night' pattern that sees most people filling in the form after dinner that evening.
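A back-of-the-envelope calculation shows why that assumption matters. The figures below are illustrative assumptions, not the actual ABS load model:

```python
# Illustrative only - these figures are assumptions, not the ABS's actual
# load model. The point is how sharply a "Census night" peak differs from
# a load spread evenly across the day.
households = 9_000_000            # assumed number of online submissions
seconds_per_day = 24 * 60 * 60

# Flawed model: submissions spread evenly over 24 hours.
even_rate = households / seconds_per_day

# More realistic model: most submissions crammed into a 3-hour evening window.
peak_window = 3 * 60 * 60
peak_share = 0.70                 # assume 70% of households submit after dinner
peak_rate = (households * peak_share) / peak_window

print(f"Even spread:  {even_rate:,.0f} submissions/second")
print(f"Evening peak: {peak_rate:,.0f} submissions/second")
print(f"Peak is roughly {peak_rate / even_rate:.0f}x the even-spread rate")
```

Under these assumed numbers the evening peak is several times the even-spread rate, which is exactly the kind of gap a load test built on the "equal use all day" assumption will never reveal.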

To prevent "overseas attacks", while neglecting to consider Australians overseas, a simple geo-blocking configuration was applied to the router.

Yet after 7 pm services began to fail due to alleged distributed denial of service attacks - which many online experts believe were simply Australians trying to fill in the Census, not an actual attack.

In fact, the Senate enquiry suggests the alleged denial of service attack didn't even exist, but rather that IBM's systems displayed false positives.

IBM chose to reboot its equipment. Yet IBM had incorrectly configured at least one of its two - yes, just two - routers. Part of this misconfiguration involved changes made only to volatile memory, that is, the running configuration; these were never committed to non-volatile memory, so after the router rebooted all those changes were simply lost.
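To illustrate the distinction - this is a toy model in Python, not IBM's actual equipment or vendor CLI - the running configuration lives in volatile RAM while the startup configuration lives in non-volatile storage, so any change that is never committed simply vanishes on reboot:

```python
# Toy model of the volatile (running) vs non-volatile (startup) config
# distinction. Purely illustrative: the keys and values are made up.
class Router:
    def __init__(self, startup_config):
        self.startup_config = dict(startup_config)   # non-volatile (NVRAM)
        self.running_config = dict(startup_config)   # volatile (RAM)

    def configure(self, key, value):
        # Live changes only touch the running config.
        self.running_config[key] = value

    def commit(self):
        # The step that was missed: save running config to non-volatile memory.
        self.startup_config = dict(self.running_config)

    def reboot(self):
        # RAM is lost; the router boots from whatever was saved to NVRAM.
        self.running_config = dict(self.startup_config)


router = Router({"uplink": "primary"})
router.configure("uplink", "primary+secondary")   # change made, never committed
router.reboot()
print(router.running_config["uplink"])            # back to "primary" - change lost
```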

As such, IBM lost its connection to Telstra's network and was now trying to run on a sole Nextgen link.

The ABS didn't help matters, either. Only after 8 pm did the ABS begin advising, through social media, that the site was experiencing an outage. By 11 pm the ABS stated the Census site would be offline for the rest of the day and that an update would be provided in the morning. It was not until two days later that the site re-opened.

So how do you prevent such a recurrence in your own enterprise? The errors were many and frequent. Reporting systems failed. Systems were not provisioned correctly. Configurations were not saved to non-volatile memory. Load modelling was flawed. There was insufficient redundancy. The geo-blocking logic was flawed.

The Senate enquiry goes into detail, but for now, suffice it to say the biggest problem in this debacle was the trust the ABS placed in IBM. Alastair MacGibbon, Special Adviser to the Prime Minister on Cyber Security, sums it up: "In many respects, while I will say to you that this was a failure to deliver on the contractual obligations that IBM had, there was a failure on the part of the ABS to sufficiently check that the contract had been delivered. That could have been achieved through more thorough assessments of the work done for them by IBM and their subcontractors."

John Podesta's emails
Presidential hopeful Hillary Clinton's campaign chairman, John Podesta, had 50,000 emails lifted after a hacking group sent him a phishing email on March 19. In classical phishing fashion, the email insisted there was a problem ("someone unsuccessfully tried to log in"), that urgent action was needed ("you need to change your password now"), and provided a link to do it ("click here").

Of course, hovering over the link in a phishing email shows the destination is not the site the email claims to be from. Sadly, people around the world continue to be deceived by phishing emails, and so they continue to be sent.

Yet, in Podesta's case, his chief of staff had the savvy to write to the Clinton operations team help desk asking if the email was legitimate. A foolish staffer replied: "This is a legitimate email. John needs to change his password immediately." As we now know, following this event, tens of thousands of emails from Podesta's account were accessed without authorisation and made public.

I suppose this is one help-desk staff member who may find future employment difficult. It is very disappointing that the chief of staff did the right thing and requested advice from people paid to be tech savvy, only for the response to be technically erroneous. We try to teach users not to click on every darn link they receive, but what can you do when your own help desk isn't astute enough to recognise phishing?

Of course, why was Podesta using a free mailbox for critical business and sensitive emails? That's the other stupid mistake in this scenario.

What could have prevented this? Firstly, I can't endorse people storing company or sensitive emails in their personal mailboxes. If they don't trust their email administrators or the security of their own systems, then that's a problem to deal with.

Next, all staff need to be trained to stop and think before clicking links. This is particularly painful in this case because the non-technical staff did stop and think! They were not confident the email was genuine and requested support. The help desk staffer grossly let them down: first by failing to recognise basic phishing techniques, second by not questioning whether Google would really send an email like that, and third by not even hovering over the link to see where it led. This last step is basic and anyone can do it.
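As a simple illustration of that last check, the Python sketch below extracts every link from an HTML email body (the message shown is a fabricated example of a phishing email) and prints the domain each link actually points to, making a mismatch with the claimed sender obvious:

```python
# Minimal sketch: list where the links in an HTML email actually point.
# The email body below is a fabricated example of a phishing message.
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

email_body = """
<p>Someone has your password. You should change it immediately.</p>
<p><a href="http://account-security.example.net/reset">CHANGE PASSWORD</a></p>
"""

parser = LinkExtractor()
parser.feed(email_body)

for link in parser.links:
    # A message claiming to come from Google should link to a google.com
    # address, not an unrelated domain like the one above.
    print(f"{link} -> actual domain: {urlparse(link).netloc}")
```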

