The current legal and regulatory environment has brought effective data management firmly into the spotlight for organisations both large and small and across all industry sectors. As the tsunami of big data continues to gather pace, organisations are becoming more aware that they need ready access to current and historic data in order to respond to the regulators and internal stakeholders alike.
Data must be accurate, accessible and easily searchable. On the flip side, there is increasing pressure to ensure that personal data is stored within the confines of Data Protection legislation. In particular, organisations need to ensure that they delete sensitive information from PCs or mobile devices before they are disposed of or recycled.
This article examines the broad range of risks that today's organisations face when managing data, from dealing with new phenomena like ransomware to asking the right questions about cleanroom quality standards.
What can go wrong?
Changes to weather patterns, future energy shortages and unforeseeable natural disasters are putting personal and corporate data at risk. While insurance companies can help buy a new computer if it is damaged in a flood or fire, they can't recover lost files. If data is important, and for most people it is, then preparing in advance for the likelihood of future blackouts and power shortages is crucial.
The most effective way to protect data is to back it up regularly, preferably across different storage locations such as the cloud or external hard drives. Homeworkers should keep computer equipment on an upper floor to prevent flood damage. Investing in an uninterruptible power supply (UPS) is also a good idea.
A UPS battery keeps a computer running for a limited period, allowing work to be saved and the machine to shut down correctly. This will prevent data loss in the event of a sudden power cut.
A surge protector can also shield equipment when electricity returns, reducing the impact of the voltage spike that often follows a power cut and can damage the electronic components inside.
If a device is damaged by water, don't expose it to heat to dry it out. Place the media in a container that will keep it moist, check with the insurance provider whether data recovery can be claimed, and get it to a specialist. Never use recovery software to make repairs, since this can destroy data that was otherwise recoverable.
Ransomware is a powerful strain of malware that first originated in Russia in 2005. It holds computers to ransom, locking them down until the user pays a fee to disable the virus. With so much money to be gained, cyber criminals have found new ways to spread the virus and cash in at the expense of victims.
So what do you do when your computer is attacked? The easy answer is nothing: turn it off and take it to a data recovery specialist. Disabling the virus requires equally knowledgeable professionals, since the biggest challenge in recovering infected data is working out how the data has been scrambled so that it can be deciphered.
Companies should also adopt processes to prevent attacks. Backing up files, and never sharing personal information or files when prompted to do so by malware, is vital. Sensitive data should also be encrypted.
Routine security measures such as virus scanning, firewalls and penetration testing can further safeguard data without making it difficult to recover. Using an ad blocker or pop-up blocker can also help.
If a suspected attack occurs, check that data is backed up. If the backup fails, find a data recovery expert to assess the seriousness of the attack. Remember: never share data and never make any payments. There is currently no antivirus software to fight this malware, which is all the more reason to be vigilant.
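One practical safeguard, sketched below in Python, is a simple integrity baseline: record a cryptographic checksum for each file and compare periodically, since a sudden wave of changed checksums across rarely edited documents can be an early sign that something is scrambling data. The function names and the approach are illustrative only, not a substitute for proper security tooling.

```python
import hashlib
from pathlib import Path


def fingerprint(directory):
    """Map each file under the directory to its SHA-256 digest."""
    digests = {}
    for path in Path(directory).rglob("*"):
        if path.is_file():
            digests[str(path)] = hashlib.sha256(path.read_bytes()).hexdigest()
    return digests


def changed_files(baseline, current):
    """List the files whose contents differ from the recorded baseline."""
    return [p for p, digest in current.items() if baseline.get(p) != digest]
```

Run `fingerprint` on a schedule and alert when `changed_files` returns far more entries than normal day-to-day editing would explain.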
Human error is a big cause of data loss and can have a major impact on fast-growing businesses. When a company is new, things usually run smoothly in small teams and there are fewer IT-related issues. Over time, however, as businesses expand and evolve, new employees join while others leave, and knowledge of the IT systems gradually erodes.
Backups might be performed, but often only just before new software is upgraded. Not even basic system maintenance is carried out, and financial decisions to renew and replace equipment are put off and delayed. In short, the blame lies with the company itself for not introducing a consistent IT management process, and the consequences of these actions, or lack thereof, can be damaging.
Failed backups can be disastrous for data, and we don't do enough to prevent them. According to a survey of Kroll Ontrack Data Recovery customers, 60% of respondents claimed they had a backup solution in place at the time of data loss, but the backup was either not current or not operating properly.
The research also revealed that external HDD backups were still the most used and sought-after approach to backing up both business and personal data. Although a great technology for storing information, what can you do if your backups fail? Nothing. The only way to get data back is to back it up properly in the first place, in multiple sites and frequently, and to watch out for error messages.
Finally, and perhaps most frustrating of all, is the issue of misplaced devices. How can we ensure that data doesn't get lost when a hard drive goes missing? Individuals and companies should invest in a backup solution, set up a backup schedule, and check that backups are running regularly and in line with that schedule.
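Checking that backups actually ran can itself be automated. A minimal Python sketch, assuming backups are files dropped into a known folder (the folder layout and the `*.bak` pattern are hypothetical), might look like this:

```python
import time
from pathlib import Path


def latest_backup_age_hours(backup_dir, pattern="*.bak"):
    """Age in hours of the newest file matching the pattern, or None if absent."""
    files = list(Path(backup_dir).glob(pattern))
    if not files:
        return None
    newest = max(f.stat().st_mtime for f in files)
    return (time.time() - newest) / 3600


def backup_is_current(backup_dir, max_age_hours=24, pattern="*.bak"):
    """True only if a backup exists and falls within the allowed window."""
    age = latest_backup_age_hours(backup_dir, pattern)
    return age is not None and age <= max_age_hours
```

A scheduled job that emails an administrator whenever `backup_is_current` returns `False` turns a silent backup failure into a visible one.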
SSD vs HDD
The arrival of hard disk drives (HDDs) and solid state drives (SSDs) has revolutionised the way people and organisations store data. The two also present different challenges when recovering and deleting data, because of the different ways each drive type records it.
HDD data is stored magnetically, so software-based erasure is the usual method for destroying it. This procedure writes a pattern of data to every sector of the disk in a continuous manner, overwriting the original data and making it unrecoverable while leaving the HDD functional.
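The overwrite idea can be illustrated in a few lines of Python. This is only a sketch operating on an ordinary file: genuine disk-erasure tools work at the raw-device level, handle remapped sectors, and verify every pass.

```python
import os


def overwrite_file(path, passes=1, chunk_size=1024 * 1024):
    """Overwrite every byte of a file in place with zeros, flushing to disk.

    Illustrative only: overwriting a file does not reach copies held
    elsewhere in the filesystem, and real secure-erase tools target
    the whole device and verify each pass.
    """
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            remaining = size
            while remaining > 0:
                n = min(chunk_size, remaining)
                f.write(b"\x00" * n)
                remaining -= n
            f.flush()
            os.fsync(f.fileno())
```

Note the explicit `flush` and `fsync`: without them the zero pattern may sit in operating-system caches rather than on the platters.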
SSD data is written electronically. The best way to destroy data on SSD drives is through physical media destruction. Shredding the media into small pieces will ensure that every single chip is destroyed. This will prevent data from unwittingly entering the public domain and leading to security and data breaches.
Many companies mistakenly believe that storing data in a virtual environment reduces their chance of data loss. Yet 40 percent of companies leveraging virtual storage experienced a data loss from those environments in 2012. The main culprit, once again, was the failure to perform data backups correctly and carefully.
Without backups, the scale of data loss is greater than ever before, because a data disaster in a virtualised world brings down many servers at once, compromising storage facilities.
Worse still, if virtualised data is lost, some companies try to rebuild it themselves instead of calling a data recovery company, which often makes recovery much harder, and in some situations even impossible. So much complexity is hidden from users and administrators when systems are virtualised that, without a solid data recovery programme, it is very easy to lose data.
Spending on cloud computing services will approach £100 billion in 2014 as the amount of data generated by companies and individuals continues to grow rapidly and reducing storage costs becomes crucially important. This will present challenges for data recovery, since administrators will not always know where the cloud is based or how to access user data. Most of the data will also be encrypted for security reasons, and will therefore require expert engineers to recover if disaster strikes.
Redundant array of independent disks (RAID) redefined how systems manage data and remains an impressive innovation. RAID is unique in that it can distribute and replicate data across multiple hard disk drives in one of several ways, known as "RAID levels", depending on the degree of redundancy and performance required. RAID uses redundancy (the storage of data in different places on multiple disks) to reduce failure rates, thereby reducing the likelihood of data downtime.
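The redundancy idea behind parity-based levels such as RAID 5 can be sketched in Python: the parity block is the byte-wise XOR of the data blocks, so any single lost block can be rebuilt from the survivors. This is a toy illustration of the arithmetic, not how a real controller stripes data across drives.

```python
def xor_parity(blocks):
    """Compute the parity block as the byte-wise XOR of equal-length blocks."""
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            parity[i] ^= byte
    return bytes(parity)


def rebuild_missing(surviving_blocks, parity):
    """Reconstruct a single lost block from the survivors plus the parity."""
    return xor_parity(surviving_blocks + [parity])
```

Because XOR is its own inverse, XOR-ing the surviving blocks with the parity cancels them out and leaves exactly the missing block, which is why a RAID 5 array survives the loss of any one drive.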
However, when drives suffer a physical failure, the ability of a RAID array to protect data is compromised. While the fundamental structure of RAID has stayed the same, arrays have grown larger, both in capacity and in the number of drives used. This has made data recovery increasingly challenging, because information is stored across so many levels and disks.
Complicating things further is the recent trend towards RAID groups: RAIDs built out of multiple RAIDs rather than single disks. More RAIDs equal more headaches when it comes to data recovery.
Corporate data is now stored on a variety of mobile devices, presenting serious challenges for data protection. The popularity of bring your own device (BYOD) continues to grow, as does the amount of company information stored on home PCs and tablets. This trend, however, presents new risks, such as employees losing devices or smartphones breaking, causing the loss of corporate data. Organisations must factor these new risks into their data recovery planning.
Phil Bridge is the managing director of Kroll Ontrack.