6 cybersecurity predictions (that might actually come true)

Created: 18 October 2017

October is National Cybersecurity Awareness Month (NCSAM) and this week’s theme is Today’s predictions for tomorrow’s internet.

And that presented us with a bit of a problem.

At Naked Security we’re big fans of NCSAM but we aren’t fans of predictions. Or at least not the popular, blue sky kind that sees every glitch, failure and fumble as a sign of the impending digital Pearl Harbour. So we decided to support week three of NCSAM with some predictions but we’re doing it our way – by taking the “tomorrow’s internet” part literally.

We asked a number of people working in different technical roles at Sophos where they’re actually planning to spend some of their time and energy in the next six months.

So here are our “from the trenches” predictions that reflect what people are actually preparing for. We’re preparing for them to come true – maybe you should too.

1. More file-less attacks

Principal Threat Researcher 2, Fraser Howard:

To date, file-less attacks have been fairly isolated, but they seem to be growing in prominence (Poweliks, Angler for a bit, Kovter and more recently Powmet). This is a natural response to the widespread deployment of machine learning.

I also expect to see a rise in PowerShell abuse.

2. Smarter fuzzing for everyone

Senior Security Analyst 2, Stephen Edwards:

I’m expecting the sophistication of fuzzing to improve significantly. Fuzzing can be used to automatically create billions of ‘stupid’ tests, and the next challenge is to make those tests smarter by informing the test creation process with knowledge about how a program works.

Automatic exploration of code is hard though.

Hybrid techniques try to balance the speed of stupid tests with the efficiency of smarter ones, while avoiding getting lost in too many choices.

A number of promising approaches to improving fuzzing have already been demonstrated and it feels to me that we’re almost at a breakthrough where those different techniques will be combined and made public.

Stephen provided such a long and detailed response to our question that we published it as a full article too. It’s called “Is security on the verge of a fuzzing breakthrough?”

3. Ask who and what, not where

Cybersecurity Specialist, Mark Lanczak-Faulds:

Traditionally, security focuses on the domain as a whole. As we look to blur the boundaries of a traditional network and the internet, what matters are the identities and assets residing within the domain.

We need to determine risk based on identity and the assets associated with that identity. When you trigger an alert accounting for those factors, you know what’s at stake and can act proportionately and swiftly.

4. Focus on exploit mitigation

Sophos Security Specialist, Greg Iddon:

Patching is no longer something you can save until after change freeze or a rainy day.

I think that in the next six to twelve months, implementing exploit mitigation – protection against the abuse of known or unknown bugs and vulnerabilities, and the underlying way attackers take advantage of these bugs and vulnerabilities – is going to be key to staying ahead.

What concerns me most is that there is a swathe of new vendors who are only focussing on detection of Portable Executable (PE) files, touting machine learning as the be-all and end-all of endpoint security. This simply isn’t true.

Don’t get me wrong, machine learning is great, but it’s just a single layer in what must be a multi-layered approach to security.

5. Ransomware repurposed

Global Escalation Support Engineer, Peter Mackenzie:

Based on some trends we’re seeing now I think we could see a shift in the way that ransomware is used.

Unlike most other malware, ransomware is noisy and scary – it doesn’t work unless you know you’ve got it, and it has to make you feel afraid. As security tools get better at dealing with ransomware, some attackers are using that noisiness as a technique for hiding something else, or as a last resort after making money off you another way using, say, key loggers or cryptocurrency miners.

Once you’ve removed the noisy ransomware infection it’s easy to think you’ve cleaned your system. What you need to ask is “why did it detonate now?” and “what else was, or still is, running on the computer where we found the ransomware?”.

6. Data is a liability, not an asset

Senior Cybersecurity Director, Ross McKerchar:

I expect to spend a lot of time in the next 6 months deleting unnecessary data and generally being very careful about what we store and where. It’s a defence in depth measure – the less you store the less you have to lose.

This applies across entire companies but, probably more importantly, on exposed assets such as web servers too. They should only have access to the minimum amount of data they need and nothing more. Why does a web server need to have access to someone’s SSN, for example? You may need it for other reasons, or your web server may need to collect an SSN once, but does it need to keep it?

That’s enough from us, we’d love to read your predictions in the comments below.

Source: Naked Security

 

Is security on the verge of a fuzzing breakthrough?

Created: 18 October 2017

October is National Cybersecurity Awareness Month (NCSAM) and this week’s theme is Today’s predictions for tomorrow’s internet.

Naked Security asked me for a “from the trenches” prediction – a prediction rooted in something practical, where I’m already preparing to spend some time and energy in the next six months.

I’m expecting fuzzing to remain an important technique in security testing, and for the sophistication of fuzzing to improve significantly.

What is fuzzing?

Fuzzing is fundamentally an automated code testing technique. It can be applied to find security problems by throwing vast amounts of tweaked and permuted (fuzzed) inputs at an application and monitoring for conditions with known security implications.

People can write clever tests, but not very many in one day. Fuzzing automates the process of test creation and so it can produce vastly more tests than a person can. Typically each test is quite stupid though, perhaps attempting to provoke the code into an exception or crash with nothing more than random input.

The raw speed of fuzzing compensates for the low odds of an individual test actually finding anything.
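As a minimal sketch of that idea (the buggy toy_parser target below is ours, purely for illustration), a ‘stupid’ fuzzer just hurls random bytes at a program and notes which inputs provoke an exception:

```python
import random

def toy_parser(data: bytes) -> None:
    """A deliberately buggy stand-in target (illustrative only):
    it crashes on longer inputs that contain a 0xFF byte."""
    if len(data) > 8 and 0xFF in data:
        raise RuntimeError("parser crash")

def dumb_fuzz(target, iterations: int = 100_000) -> list[bytes]:
    """Throw random inputs at `target` and record any that crash it."""
    crashers = []
    for _ in range(iterations):
        size = random.randrange(1, 16)
        data = bytes(random.randrange(256) for _ in range(size))
        try:
            target(data)
        except Exception:
            crashers.append(data)  # low odds per test, but we run many
    return crashers
```

Each individual input has tiny odds of hitting the bug, but because test creation is fully automated, sheer volume makes those odds irrelevant.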

Commodification

If you want to run millions of tests (or more – I try to test our engine for billions of iterations in each area I consider), then you need dedicated hardware, ideally lots of it.

Recently, large companies like Google and Microsoft have been trying to make it easier to do fuzzing at scale whilst also packaging it up as an accessible service.

Fuzzers that individuals can easily get running have also been rapidly improving, with the open source American Fuzzy Lop (AFL) being the standout player for me.

AFL describes itself as:

a security-oriented fuzzer that employs a novel type of compile-time instrumentation and genetic algorithms to automatically discover clean, interesting test cases that trigger new internal states in the targeted binary. This substantially improves the functional coverage for the fuzzed code.
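That recipe of instrumentation plus genetic selection can be illustrated with a toy sketch (the names and the line-tracing shortcut below are ours, not AFL’s): measure which code each input reaches, and keep only mutants that reach something new.

```python
import random
import sys

def coverage_of(fn, data: bytes) -> set:
    """Run fn(data) and record which source lines executed -- a crude
    stand-in for AFL's compile-time instrumentation."""
    lines = set()
    def tracer(frame, event, arg):
        if event == "line":
            lines.add((frame.f_code.co_name, frame.f_lineno))
        return tracer
    sys.settrace(tracer)
    try:
        fn(data)
    except Exception:
        pass                      # crashes are interesting, not fatal here
    finally:
        sys.settrace(None)
    return lines

def coverage_fuzz(target, seed: bytes, iterations: int = 2000) -> list[bytes]:
    """Keep only mutants that trigger new internal states in the target."""
    corpus = [seed]
    seen = coverage_of(target, seed)
    for _ in range(iterations):
        data = bytearray(random.choice(corpus))    # start from a known-interesting input
        data[random.randrange(len(data))] = random.randrange(256)  # mutate one byte
        cov = coverage_of(target, bytes(data))
        if not cov <= seen:                        # reached new code?
            seen |= cov
            corpus.append(bytes(data))
    return corpus
```

The corpus that accumulates is exactly AFL’s “clean, interesting test cases”: a small set of inputs that together exercise far more of the program than random bytes alone would.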

Fuzzing can be used as a black box technique (working without access to an application’s source code), so as it becomes more accessible to you it becomes more accessible to your adversaries too. That alone is reason enough to start.

Smarter fuzzing

One way to make fuzzing more accessible and efficient is to make it less stupid. This normally involves using knowledge of how a program works and how bugs can occur to influence the process of automated test creation.

Automatic exploration of code is hard though. Sophisticated computer programs have so many possible execution paths that attempting to trace them all causes a rapid “explosion” in complexity (known as a combinatorial explosion). There are simply too many possibilities even for a computer to cope with. (How the code is explored is a detailed topic beyond the scope of this article, but if you want to go down that rabbit hole, start with symbolic execution and then perhaps compiler transformation).
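A back-of-envelope calculation shows why the explosion bites so quickly: every independent two-way branch doubles the number of execution paths.

```python
# Combinatorial path explosion: a function with n independent two-way
# branches has 2**n distinct execution paths to explore.
def path_count(n_branches: int) -> int:
    return 2 ** n_branches
```

Just 30 branches already gives over a billion paths, and real programs have vastly more than 30 – which is why naively tracing every path is hopeless even for a computer.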

Hybrid techniques try to balance the speed of stupid tests with the greater efficiency of smarter ones, while avoiding getting lost in too many choices.

The recent winner of a $2 million cyber security prize used one such approach: concolic execution. That work, however, was sponsored at least in part by the USA’s Defense Advanced Research Projects Agency (DARPA), and is not likely to be released publicly anytime soon (the goal of the challenge was to automate writing exploits…)

A breakthrough?

As code gets harder to understand, the volume of code written each year increases, and as more and more of our lives touch computers in some way, the use of automation to find bugs will only increase in importance.

A number of promising approaches to improving fuzzing have already been demonstrated and it feels to me that we’re almost at a breakthrough where those different techniques are combined and made public – providing any developer with the opportunity to efficiently find bugs during development, before they cause problems.

The most promising tools that I know of come from Shellphish, but I don’t think they’re yet accessible enough to count as the breakthrough I’m hoping for.

Source: Naked Security

 

Encryption chip flaw afflicts huge number of computers

Created: 18 October 2017

Researchers have discovered a serious vulnerability in Infineon Trusted Platform Module (TPM) cryptographic processors used to secure encryption keys in many PCs, laptops, Chromebooks and smartcards.

An early warning that something might be up emerged on 30 August 2017, when the Estonian Information System Authority (RIA) issued an alert about a “theoretical” problem affecting 750,000 national ID cards issued after October 2014.

The RIA didn’t go into detail but the fact that cancelling the country’s national elections was floated had security people worried.

Last week we got confirmation from Infineon that the problem was serious enough to demand firmware updates from computer vendors, including HP, Fujitsu, Lenovo, Acer, Asus, LG, Samsung and Toshiba.

In cryptographic terms, this one’s a biggie: a flaw in the way the public/private key pair is generated makes it possible for an attacker to work out the private 1024-bit and 2048-bit RSA keys stored on the TPM simply by having access to the public key.

According to the researchers, a factorisation attack based on the “Coppersmith” method could at worst be achieved on Amazon Web Services (AWS) in 2 CPU hours for a 512-bit key at a cost of fractions of a cent, in 97 CPU days for a 1024-bit key at $40-$80, and in 140.8 CPU years for a 2048-bit key at $20,000-$40,000.

That probably still puts attacks against 2048-bit keys out of the range of all but the most serious attackers. 1024-bit keys have also been regarded as too weak for some time – security strength guidelines published by the US National Institute of Standards and Technology (NIST) have graded 1024-bit RSA keys “disallowed” since the start of 2013.

Explained the researchers, who will present more information at this month’s ACM CCS conference:

The currently confirmed number of vulnerable keys found is about 760,000 but possibly up to two to three magnitudes more are vulnerable.
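Counting vulnerable keys is possible because affected moduli carry a detectable structure: their residues modulo small primes always land inside the subgroup generated by 65537. The sketch below is illustrative only – the prime list is a stand-in, not the researchers’ actual fingerprint parameters:

```python
# Illustrative sketch of a ROCA-style fingerprint test on an RSA modulus n.
# The real detection uses a carefully chosen set of small primes; this short
# list is an assumption made for demonstration.
SMALL_PRIMES = [11, 13, 17, 19, 37]

def powers_mod(g: int, p: int) -> set[int]:
    """The cyclic subgroup {g**0, g**1, ...} modulo p."""
    subgroup, x = set(), 1
    while x not in subgroup:
        subgroup.add(x)
        x = (x * g) % p
    return subgroup

def looks_vulnerable(n: int) -> bool:
    """True if modulus n shows the telltale structure of affected keys."""
    return all(n % p in powers_mod(65537, p) for p in SMALL_PRIMES)
```

Because the test needs nothing but the public modulus, anyone can scan certificate stores and key servers for weak keys – which is how totals like “about 760,000” are reached.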

Do Trusted Platform Modules matter?

First introduced in 2009, a TPM is a cryptographic chip standard built into the motherboard of many (but by no means all) PCs and laptops as a secure place to store system passwords, certificates, encryption keys and even biometric data.

The principle is simple: storing keys inside the TPM is a lot better than keeping them on the hard drive or letting them be managed by the operating system, both of which can be compromised.

Microsoft’s BitLocker uses a TPM. TPMs can also be used for authentication (checking a PC is the one it claims to be) and attestation (checking that a system’s boot image hasn’t been tampered with), for example on Google’s Chromebooks.

The vulnerability was first reported to Infineon in February this year, but the headache now is working out which devices are (or are not) affected.

Many computers, especially older ones, don’t have TPMs and others use chips from vendors other than Infineon.

Windows users can check for the presence of a TPM by pressing Win+R to open Run and entering the command tpm.msc (if no TPM is present you’ll see a message stating this); the manufacturer is stated at the bottom of the dialogue box. This interface can also be used to regenerate keys, which might be necessary at some point.

Beyond that, the best place to start assessing the flaw’s impact is on the website of the affected vendor and Microsoft’s help page.

According to the latter, what is now designated CVE-2017-15361 was given a “workaround” update in last week’s monthly Windows patch update, which should be applied before any firmware update from the TPM maker.

And it’s not just PCs: a labyrinth of other devices could also be caught up in the issue, for example around 2% of YubiKey hardware tokens. The same goes for Google Chromebooks, almost all of which seem to use Infineon’s TPMs but will, thankfully, update automatically without user intervention.

Sophos products that manage BitLocker encryption on affected hardware may be impacted. Sophos customers should check Knowledge Base article 127650 for information.


Source: Naked Security

 

Internet of Ships falling down on security basics

Created: 18 October 2017

We may not think of ships as industrial control systems (ICS). But, according to Ken Munro, a security researcher with the UK-based Pen Test Partners, we should.

Those who operate them should as well, he said in a blog post summarizing a talk he gave at a conference in Athens, Greece on how easy it is to hack ships’ communication systems. While they may not have physical leaks, they are catastrophically porous when it comes to cyber security.

The same history that has led to poor security in land-based ICSs applies to ships, he wrote – they used to run on “dedicated, isolated networks,” and therefore were not at risk from online attacks. But no more:

Now ships: complex industrial controls, but floating. Traditionally isolated, now always-on, connected through VSAT, GSM/LTE and even Wi-Fi. Crew internet access, mashed up with electronic navigation systems, ECDIS, propulsion, load management and numerous other complex, custom systems. A recipe for disaster.

And there are multiple ways for disaster to happen – most of them due to a failure to practice what regular Naked Security readers will recognise as security basics.

Simply by using Shodan, the search engine that indexes internet-connected devices, Munro found marine equipment all over the world. For one of the major maritime satcom (satellite communication) vendors, Inmarsat, he found “plenty of logins for the Globe Wireless over plaintext HTTP”, along with evidence that the firmware of many of their older comm boxes was, as he put it, “dated”.

Another example, the Cobham Sailor 900 satellite antenna, was “protected” from a malicious attacker by the unique, complex username and password combo of: admin/1234.

As Catalin Cimpanu of Bleeping Computer noted, a public exploit already exists for that antenna, “that makes hacking it child’s play for any knowledgeable attacker.” He added that such antennas are not only found on container and passenger ships, “but also on navy and private security boats,” plus helicopters and airplanes.

But, where things “got a bit silly” for Munro was when he discovered a collection of KVH terminals that not only lacked TLS encryption on the login, but also included the name of the vessel plus an option to “show users.” Munro’s reaction: “WTF??”

That option gave up a list of the members of the crew online at that point. He added that a moment on Google yielded the Facebook profile of the deck cadet he had spotted using the commbox.

Simple phish, take control of his laptop, look for a lack of segregation on the ship network and migrate on to other more interesting devices.

Or simply scrape his creds to the commbox and take control that way.

It shouldn’t be this easy!

These flaws are not just now being discovered. They have been noted for years. More than four years ago, in April 2013, security firm Rapid7 reported that in just 12 hours they were able to track more than 34,000 ships worldwide using the maritime protocol Automatic Identification System (AIS).

Using those AIS receivers, it reckoned:

…we would probably be able to isolate and continuously track any given vessel provided with an MMSI number. Considering that a lot of military, law enforcement, cargo and passenger ships do broadcast their positions, we feel that this is a security risk.

And Munro’s research found that things have only gone downhill since – in the past four and a half years, the number of exposed ships has increased.

But Munro has some (rather depressingly familiar) recommendations for both civilian and military mariners: Start practicing the basics.

  • Update satcom boxes immediately.
  • Implement TLS on all satcom boxes.
  • Increase password complexity, especially for high-privilege accounts.

He concluded:

There are many routes on to a ship, but the satcom box is the one route that is nearly always on the internet. Start with securing these devices, then move on to securing other ship systems.

Source: Naked Security

 

Google Home Mini glitch triggers secret recordings

Created: 17 October 2017

The privacy glitch that befell Google’s new £49 ($49) Home Mini speaker last week was small but, critics might suggest, still revealing.

The trouble started when journalist Artem Russakovskii, who had been given a review unit at the launch event on 4 October, noticed that the Mini kept turning itself on even when not commanded to.

Deciding to search for clues in the device’s logs, he got a shock:

I opened it up, and my jaw dropped. I saw thousands of items, each with a Play button and a timestamp.

The Mini, it seemed, had recorded every sound detected in its vicinity over a two-day period, no matter how inconsequential, and uploaded it to Google. It even activated after a simple knock on the wall.

This behaviour could be disabled and recordings deleted but only at the expense of harming the system’s future voice recognition accuracy.

What on earth was the Mini playing at?

According to Google, the device had malfunctioned because of a physical problem with the touch panel, which was designed to allow owners to activate recording without using the “OK Google” voice command.

Although this only affected review units handed out during press launches, the company decided to disable the touch feature on all Minis by way of a software update. This process started on 7 October (a day after the errant recording was brought to its attention) and was due to be completed by 15 October.

Concluded Russakovskii:

My Google Home Mini was inadvertently spying on me 24/7 due to a hardware flaw. Google nerfed all Home Minis by disabling the long-press in response, and is now looking into a long-term solution.

For most owners, the usability of the Mini (which doesn’t go on official sale until 19 October), will be unaffected by the software change.

As for the Mini’s image, that might be a bigger issue.

Although resembling a small speaker, the Mini is really a sensor that integrates into Google’s Home platform, which sends the voice commands or questions it receives to a remote server.

Although these activate only after they detect a command such as “OK Google”, by design they are listening all the time in expectation of this.

The system also relates commands to an individual user account. Google allows account holders to control this data as well as mute the Mini, but users must remember to do this. Many probably won’t.

The privacy implications of this system are obvious even as Google dismissed worries that this is just a new form of surveillance dressed up as something useful:

We take user privacy and product quality concerns very seriously. Although we only received a few reports of this issue, we want people to have complete peace of mind while using Google Home Mini.

The incident has echoes of Amazon’s troubles earlier this year, when it found itself fending off a police request to access recordings made by its Echo speaker in connection with a murder investigation.

Google’s Mini will doubtless still sell well – this isn’t another example of the privacy arguments that helped sink Google Glass. But the last thing the company needs is to add fuel to the idea that these devices are, however inadvertently, gateways to a new era of home surveillance.


Source: Naked Security

 

The fix is in for hackable voting machines: use paper

Created: 17 October 2017

Want better security of election voting results? Use paper.

With the US almost halfway between the last national election and the 2018 mid-terms, not nearly enough has been done yet to improve the demonstrated insecurity of current electronic voting systems. Multiple experts say one obvious, fundamental move should be to ensure there is a paper trail for every vote.

That was a major recommendation at a panel discussion this past week that included representatives of the hacker conference DefCon and the Atlantic Council think tank, which concluded that while there is progress, it is slow.

The progress includes the designation of voting systems as critical infrastructure by the Department of Homeland Security, plus moves in Texas and Virginia to improve the security of their systems by using paper.

Most states already do that. But Lawrence Norden, co-author of a September 2015 report for the Brennan Center for Justice titled “America’s Voting Machines at Risk,” wrote in a blog post last May for The Atlantic that 14 states, “including some jurisdictions in Georgia, Pennsylvania, Virginia, and Texas – still use paperless electronic voting machines. These systems should be replaced as soon as possible.”

There is little debate about the porous nature of electronic voting systems – it has been reported for years. It was close to four years ago, in January 2014, that the bipartisan Presidential Commission on Election Administration (PCEA) declared:

There is an impending crisis … from the widespread wearing out of voting machines purchased a decade ago. … Jurisdictions do not have the money to purchase new machines, and legal and market constraints prevent the development of machines they would want even if they had funds.

A couple of years later the Brennan Center issued its report, which predicted that in the 2016 elections, 43 states would be using electronic voting machines that were at least 10 years old – “perilously close to the end of most systems’ expected lifespan.”

The biggest risk from that, the report said, was failures and crashes, which could lead to long lines at voting locations and lost votes. But it also said security risks were at unacceptable levels:

Virginia recently decertified a voting system used in 24 percent of precincts after finding that an external party could access the machine’s wireless features to “record voting data or inject malicious data.”

Smaller problems can also shake public confidence. Several election officials mentioned “flipped votes” on touch screen machines, where a voter touches the name of one candidate, but the machine registers it as a selection for another.

Not to mention that with solely digital voting machines, there is no way to audit the results.

While there is still no documented evidence that hostile nation states – mainly Russia – have been able to tamper directly with election results, the risk is there. At this past summer’s DefCon conference, one of the most high-profile events was the so-called Voting Village, where Wired reported that, “hundreds of hackers got to physically interact with – and compromise – actual US voting machines for the first time ever.”

The reason it hadn’t been done before, at least publicly, was that it was illegal. But at the end of 2016, an exemption to the Digital Millennium Copyright Act finally legalized hacking of voting machines for research purposes.

Not surprisingly, hackers didn’t have all that much trouble – they found multiple ways to breach the systems both physically and with remote access. And according to Jake Braun, a DefCon Voting Village organizer and University of Chicago researcher, the results undermined the claim that the decentralized voting system in the US (there are more than 8,000 jurisdictions in the 50 states) would make it more difficult to hack.

With only a handful of companies manufacturing electronic voting machines, a single compromised supply chain could impact elections across multiple states at once, he noted.

It’s not just tampering with actual voting results that can damage the credibility of an election either. Norden told Wired that, “you can do a lot less than that and do a lot of damage… If you have machines not working, or working slowly, that could create lots of problems too, preventing people from voting at all.”

Norden doesn’t dismiss the need for technology improvements. “Among the wide variety of solutions being explored or proposed are use of encryption, blockchain, and open source software,” he wrote in his blog post.

But the most effective security measure, he contended in his blog post, is low-tech:

The most important technology for enhancing security has been around for millennia: paper. Specifically, every new voting machine in the United States should have a paper record that the voter reviews, and that can be used later to check the electronic totals that are reported.

This could be a paper ballot the voter fills out before it is scanned by a machine, or a record created by the machine on which the voter makes her selections—so long as she can review that record and make changes before casting her vote.

That kind of improvement doesn’t have to take a lot of time or cost big bucks either, he said, and would create “software independent” voting systems, where an “undetected change or error in its software cannot cause an undetectable change or error in an election outcome.”

Given what are sure to be continued attempts at foreign interference in US elections, “it is the least we can do,” he said.

Source: Naked Security

 

Flash 0-day in the wild – patch now!

Created: 17 October 2017

This past Patch Tuesday, Adobe released, well, nothing. Given that the past few months of Adobe Patch Tuesdays have been gradually diminishing, perhaps some of us thought these Flash-related patches were going the way of the dodo.

Alas, it was wishful thinking.

Six days after Patch-Tuesday-that-wasn’t, Adobe has released an out-of-band patch for Flash in response to a zero-day vulnerability that’s being exploited in the wild.

This Flash vulnerability, CVE-2017-11292, could allow remote code execution, and is rated as Critical. It affects Flash both in browsers and on desktop players, on Windows, Mac, Linux, and Chrome OS.

Adobe notes that this vulnerability is being exploited in the wild, specifically by a criminal group that has previously used other Flash vulnerabilities to carry out their attacks.

Sophos disrupts the attack by blocking the URL that malware is downloaded from, and by detecting the malware itself as Mal/Generic-S.

Nevertheless, if you’re still using Adobe Flash, you should patch right away.

But better yet, get rid of Flash altogether (if you can).

Even Adobe knows that its beleaguered media player’s days are numbered. Browser vendors have been trying to sweep it further and further under the rug for years and in July Adobe announced that it was finally pulling the plug.

By the end of 2020.

Given this progress, and in collaboration with several of our technology partners – including Apple, Facebook, Google, Microsoft and Mozilla – Adobe is planning to end-of-life Flash. Specifically, we will stop updating and distributing the Flash Player at the end of 2020 and encourage content creators to migrate any existing Flash content to these new open formats.

There’s another forty or so Patch Tuesdays between now and then.

Flash’s days are very numbered, but it’s having an agonising, protracted exit. For everyone’s sake its demise really can’t come soon enough. Adobe’s waiting until the end of 2020; you don’t have to.


Source: Naked Security

 

Wi-Fi at risk from KRACK attacks – here’s what to do

Published: 16 October 2017

News of the week – and it’s still only Monday – is a Bug With An Impressive Name (and its own logo!) called the KRACK Attack.

Actually, there are several attacks of a similar sort discussed in the paper that introduced KRACK, so they’re more properly known as the KRACK Attacks.

These KRACK Attacks mean that most encrypted Wi-Fi networks out there are not as secure as you think.

KRACK works against networks using WPA and WPA2 encryption, which these days covers most wireless access points where encryption has been turned on.

An attacker in your midst (at least, within Wi-Fi range) could, in theory, sniff out at least some of the encrypted traffic sent to some of the computers in your organisation.

Even if an attacker can only “bleed off” small amounts of traffic, in dribs and drabs, the end result could be very serious.

(If you remember the Firesheep attack of 2010, bleeding just a few bytes of data when you connected to Facebook or Twitter was enough to let a crook clone your connection and access your account for as long as you stayed logged in.)

KRACK in a few words

KRACK is short for Key Reinstallation Attack, which is a curious name that probably leaves you as confused as we felt when we heard about it, so here’s our extremely simplified explanation of what happens (please note this explanation covers just one of numerous flavours of similar attack).

At various times during an encrypted wireless connection, you (the client) and the access point (the AP) need to agree on security keys.

To do so, a protocol known as the “four-way handshake” is used, which goes something like this:

  1. (AP to client) Let’s agree on a session key. Here’s some one-time random data to help compute it.
  2. (Client to AP) OK, here’s some one-time random data from me to use as well.

At this point, both sides can mix together the Wi-Fi network password (the so-called Pre-Shared Key or PSK) and the two random blobs of data to generate a one-time key for this session.

This avoids using the PSK directly in encrypting wireless data, and ensures a unique key for each session.

  3. (AP to client) I’m confirming we’ve agreed on enough data to construct a key for this session.
  4. (Client to AP) You’re right, we have.
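
In real WPA2, the mixing described above first stretches the Wi-Fi password into a Pairwise Master Key (PMK), then combines the PMK with both MAC addresses and both nonces to produce the per-session key. Here’s a minimal sketch of that two-step derivation; the PBKDF2 step matches what WPA2 actually does, but HMAC-SHA256 stands in for the exact IEEE 802.11 PRF, and all names and parameters are illustrative:

```python
import hashlib
import hmac
import os

def derive_session_key(psk: bytes, ssid: bytes,
                       ap_nonce: bytes, client_nonce: bytes,
                       ap_mac: bytes, client_mac: bytes) -> bytes:
    # Step 1: stretch the Wi-Fi password into the PMK.  WPA2 really does
    # use PBKDF2-HMAC-SHA1 with 4096 iterations and the SSID as salt.
    pmk = hashlib.pbkdf2_hmac("sha1", psk, ssid, 4096, dklen=32)

    # Step 2: mix the PMK with both nonces and both MAC addresses so the
    # session key is unique per handshake.  The standard defines a
    # specific PRF over "Pairwise key expansion"; HMAC-SHA256 stands in
    # for it in this sketch.
    data = (min(ap_mac, client_mac) + max(ap_mac, client_mac) +
            min(ap_nonce, client_nonce) + max(ap_nonce, client_nonce))
    return hmac.new(pmk, b"Pairwise key expansion" + data,
                    hashlib.sha256).digest()

# Same inputs give the same key on both sides; fresh nonces give a fresh key.
ap_nonce, client_nonce = os.urandom(32), os.urandom(32)
k1 = derive_session_key(b"secret-psk", b"MyNetwork", ap_nonce, client_nonce,
                        b"\x02" * 6, b"\x04" * 6)
k2 = derive_session_key(b"secret-psk", b"MyNetwork", ap_nonce, client_nonce,
                        b"\x02" * 6, b"\x04" * 6)
assert k1 == k2
```

Because both random blobs feed the derivation, neither side alone controls the session key, and an eavesdropper who never learns the PSK can’t compute it.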

The KRACK Attacks (with numerous variations) use the fact that although this four-way protocol was shown to be mathematically sound, it could be – and in many cases, was – implemented insecurely.

In particular, an attacker with a rogue access point that pretends to have the same network number (MAC address) as the real one can divert message 4 and prevent it reaching the real AP.

During this hiatus in the handshake, the client may already have started communicating with the AP, because the two sides already have a session key they can use, even though the handshake hasn’t been finalised.

This means that the client will already be churning out cryptographic material, known as the keystream, to encrypt the data it transmits.

To ensure a keystream that never repeats, the client uses the session key plus a nonce, or “number used once”, to encrypt each network frame; the nonce is incremented after each frame so that the keystream is different each time.
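
That per-frame keystream can be sketched as a function of the session key and the nonce. In this illustrative toy (SHA-256 stands in for WPA2’s real AES-based CCMP cipher; all names are assumptions, not the standard), incrementing the nonce gives a fresh keystream, while resetting it reproduces the old one exactly:

```python
import hashlib

def frame_keystream(session_key: bytes, nonce: int, length: int) -> bytes:
    # Generate `length` bytes of keystream for one frame.  The nonce is
    # mixed in so each frame's keystream is unique; an inner counter
    # extends the stream one hash block at a time.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(
            session_key + nonce.to_bytes(6, "big") +
            counter.to_bytes(4, "big")
        ).digest()
        counter += 1
    return out[:length]

key = b"K" * 32
# Incrementing the nonce after each frame gives a different keystream...
assert frame_keystream(key, 1, 64) != frame_keystream(key, 2, 64)
# ...but "reinstalling" the key and rewinding the nonce repeats it exactly,
# which is the condition KRACK engineers.
assert frame_keystream(key, 1, 64) == frame_keystream(key, 1, 64)
```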

As far as we can determine, all the KRACK attacks involve reused keystream material accessed by “rewinding” crypto settings and thus encrypting different data with the same keystream. If you know one set of data you can figure out the other – that’s the best case; some cases are worse than that because you can as good as take over the connection both ways.

Back to the handshake

At some point, the real AP will send another copy of message 3, possibly several times, until the rogue AP finally lets the message get through to the client.

The mathematical certainty in the protocol now meets cryptographic sloppiness in its implementation.

The client finalises the handshake at last, and resets its keystream by “reinstalling” the session key (thus the name of the attack), and resetting the nonce to what it was immediately after stage 2 of the handshake.

This means the keystream starts repeating itself – and re-using the keystream in a network encryption cipher of this sort is a big no-no.

If you know the contents of the network frames that were encrypted the first time, you can recover the keystream used to encrypt them; if you have the keystream from the first bunch of network frames, you can use it to decrypt the frames encrypted the second time when the keystream gets re-used.
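
The arithmetic behind that recovery is plain XOR: if two frames are encrypted with the same keystream, knowing one plaintext reveals the keystream, which then decrypts the other. A minimal sketch with a generic stream cipher (the frame contents here are made up for illustration):

```python
import os

def xor(a: bytes, b: bytes) -> bytes:
    # XOR two byte strings, truncating to the shorter one.
    return bytes(x ^ y for x, y in zip(a, b))

keystream = os.urandom(32)            # one keystream, wrongly used twice
frame1 = b"GET /login HTTP/1.1\r\n"   # the attacker knows/guesses this one
frame2 = b"Cookie: session=s3cr3t"    # the attacker wants this one

c1 = xor(frame1, keystream)
c2 = xor(frame2, keystream)

# Known plaintext for frame 1 reveals the keystream bytes it consumed...
recovered_keystream = xor(c1, frame1)
# ...which decrypt frame 2, encrypted under the same (re-used) keystream.
recovered_frame2 = xor(c2, recovered_keystream)
assert recovered_frame2 == frame2[:len(recovered_frame2)]
```

This is exactly why keystream re-use is a “big no-no”: the cipher itself is never broken, yet the plaintext falls out of simple XOR arithmetic.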

Even if attackers are only able to recover a few frames of the data in any session, they still come out ahead.

Gold dust sounds less valuable than a gold ingot – but if you collect enough gold dust, you get to the same value in the end.

What to do

Changing your Wi-Fi password won’t help: this attack doesn’t recover the password (PSK) itself, but instead allows an attacker to decrypt some of the content of some sessions.

Changing routers probably won’t help either, because there are numerous variants of the KRACK Attacks that affect most Wi-Fi software implementations in most operating systems.

Here’s what you can do:

  • Until further notice, treat all Wi-Fi networks like coffee shops with open wireless, where your network frames are never encrypted.
  • Stick to HTTPS websites, which means the traffic between your browser and the website is encrypted even if it travels over an unencrypted connection.
  • Consider using a VPN, which means that all your network traffic (not just your web browsing) is encrypted, from your laptop or mobile device to your home or work network, even if it travels over an unencrypted connection along the way.
  • Apply KRACK patches for your clients (and access points) as soon as they are available.
  • Sophos customers should read knowledgebase article 127658.

Simply put, if you ever used open Wi-Fi access points (or Wi-Fi access points where the password is widely known, e.g. printed on the menu or handed out by the barista), you were already living in a world where at least some of your network traffic could be sniffed out at will by anyone.

The precautions that you take in those cases – why not take them all the time?

If you always encrypt everything yourself, in a way that you get to choose and can control, you never have to worry what you might have forgotten about.


Source: Naked Security

 

How the Waltham cyberstalker’s reign of fear was ended

Published: 16 October 2017

The recent arrest of, and federal charges against, a 24-year-old alleged cyberstalker bring to light the terrible fallout from unrelenting online harassment, and highlight that no one is truly anonymous online, not even criminals.

The crime

Arrested on 6 October, Ryan Lin of Newton, Massachusetts allegedly harassed and cyberstalked his former roommate for over a year in a manner so egregious and terrifying that it merited a federal investigation.

The harrowing details of his alleged activities are laid out in a 28-page affidavit written by FBI Agent Jeffrey Williams and provided by the U.S. Department of Justice. The crux of it is that Lin used email, SMS, social media and phone apps to make life a living hell for his victim: for over a year he harassed her, her roommates, her family and friends, her employers, her landlord and the community she lived in by sending death threats, rape threats, bomb threats and even child pornography.

Lin was a computer science graduate of Rensselaer Polytechnic Institute (RPI), and he had enough cybersecurity knowledge to effectively anonymize himself while he embarked on his campaign of harassment.

Outside of his formal computer science education, Lin had more than a passing understanding of infosec and opsec practices. A quick perusal of one of his active Twitter accounts reveals an interest in the Tor project, Tails (the privacy-centric Linux distribution), major data breaches like Yahoo and Equifax, and the nuances of VPN use.

The affidavit also mentions that Lin had harassed a number of former high school and college classmates, either by impersonating them with fake social media accounts under their names, or by creating fake profiles under the names of shared classmates to socially engineer his way into their Facebook circles and harass them directly.

The technology

According to the affidavit, Lin used a VPN to cover his tracks while he created the accounts that he used to send his harassing messages. VPNs hide your computer’s IP address and the traffic between you and your VPN provider is encrypted, making it incomprehensible to anyone intercepting it.

VPNs are an important security tool, but there’s one major caveat: the encrypted tunnel between you and your VPN provider protects your traffic from everyone except the VPN provider itself, which gets to see everything passing through it.

VPN services are a dime a dozen, including many free ones. Using a shoddy VPN service run by an untrustworthy company can put your data at more risk than not using one at all. And no matter who your VPN provider is, you should expect them to cooperate with law enforcement if they are subpoenaed to do so.

As Lin himself noted on Twitter just days before he was arrested, a VPN can’t be relied upon for anonymity:

Something that everyone should know  – VPN provides privacy. TOR provides *decent* anonymity (if you use it correctly) #vpn #tor #broadbandprivacy

Given that knowledge, it’s notable that, according to the affidavit, it was his own VPN traces that proved key to his arrest.

Another highly portentous tweet was called out in the affidavit:

For example, on June 15, 2017, Lin, using the Twitter handle @ryanlindev, re-tweeted a tweet from “IPVanish,” that read: “Your privacy is our priority. That’s why we have a strict zero log policy.” Lin criticized the tweet, saying, “There is no such thing as VPN that doesn’t keep logs. If they can limit your connections or track bandwidth usage, they keep logs.”

The affidavit details that Lin went through pains to anonymize his traffic by using a mix of proxy servers, several different VPN services and Tor.

In a number of the instances of online harassment under investigation, the user both used a VPN and used an anonymizing service to mask his true IP address. Taking this two-step approach provides the user with another layer of anonymity, and demonstrates an awareness of and concern about the exact issue that Lin highlights in his tweet: the fact that VPNs track activity with logs.

From the affidavit, it appears that FBI Agent Williams used VPN logs to identify IP addresses that could be traced to Lin’s home and former employer. But that wasn’t a smoking gun, so to speak, just one of many data points used to build the case.

More data points in the case related to email addresses attributed to Lin, which he used to communicate openly with his victim and her roommates. It seems he accessed those emails using the same VPN-assigned IP address that he used to create the email accounts used to harass and threaten his victim.

Lin could face at least five years in prison if he’s convicted.

The impact

I took a special interest in this story as I live in the city that was the target of the frequent shooting and bomb threats: Waltham, a small city of just about 60,000 people.

The bomb threats started in July of this year and were sent to city schools, government offices, libraries, daycare facilities, and even a federal archive building.

In addition to the wide swath of targets, the threats were also increasing in frequency: there was a stretch when threats were sent to Waltham schools daily, for weeks on end. In the span of just a few months the schools received dozens of bomb threats, including 24 in a single day.

Aside from the huge burden this placed on local police, the emotional impact on the community can’t be overstated.

Each school bomb threat prompted a closure or a complete student evacuation until the buildings were swept and deemed safe. With threats arriving near-daily, many children were scared of going to school, and more than a few parents opted to keep their kids at home entirely.

There wasn’t much information that law enforcement could divulge to help calm fears while the investigation was active, and as the threats continued it seemed there was no end in sight.

Thankfully, the bomb threats stopped promptly after the arrest. Waltham residents (myself included) are relieved, but also horrified at the nature of what was motivating these threats, unbeknownst to all of us at the time.

I’ll leave you with the words of Harold H. Shaw, Special Agent in Charge of the Federal Bureau of Investigation, Boston Field Division:

As alleged, Mr. Lin orchestrated an extensive, multi-faceted campaign of computer hacking and online harassment that caused a huge amount of angst, alarm, and unnecessary expenditure of limited law enforcement resources

This kind of behavior is not a prank, and it isn’t harmless. He allegedly scared innocent people, and disrupted their daily lives, because he was blinded by his obsession. No one should feel unsafe in their own home, school, or workplace, and the FBI and our law enforcement partners hope today’s arrest will deter others from engaging in similar criminal conduct.


Source: Naked Security

 

Chrome smoked by Edge in browser phishing test

Published: 16 October 2017

At last some good news for Microsoft’s ignored Edge browser: new tests by NSS Labs have found that it beats Chrome and Firefox hands down at blocking malware downloads and phishing attacks.

After 23 days of continuous tests between 23 August and 15 September this year, Edge version 38 blocked 96% of the socially-engineered malware (SEM) samples thrown against it in the form of malicious links and pop-ups, compared to 88% for Chrome version 60 and 70% for Firefox version 55. (The researchers describe SEM attacks as “a dynamic combination of social media, hijacked email accounts, false notification of computer problems, and other deceptions to encourage users to download malware”.)

Edge did even better when it came to phishing, blocking 92% of malicious URLs, compared to Chrome’s 75% and Firefox’s 61%.

NSS also looked at “zero hour” protection, which is how long it takes for each browser to block brand new threats once they’ve been introduced into the test.

For zero-hour SEM, Chrome started at 75% before climbing to a peak of 95% after seven days, while Firefox started at 54%, climbing to a peak rate of only 80% over the same period. Compare that to Edge which managed a steady 99.8% from hour one.

For zero-hour phishing URLs the gap wasn’t quite as wide, but even here Edge started at 82% to Chrome’s 59% and Firefox’s 51%. Firefox clawed back some of the gap by day seven, scoring a peak rate of 81% to Chrome’s weakening 65%, but still ended up lagging Edge’s 89%.

These differences sound significant but how seriously should we take them?

There are only two variables here, the first of which is NSS Labs’ test methodology. We’ll ignore that, partly because assessing security testing methodologies could consume an entire article on its own but also because there’s a better candidate – the cloud-based blacklists of files and URLs these browsers use to decide what’s trustworthy and what’s not.

Edge uses Microsoft’s SmartScreen (also used by Internet Explorer), while Chrome and Firefox use Google’s Safe Browsing API (also used by Apple’s Safari, Opera and Vivaldi as well by other Google services such as Gmail).

As far as the NSS tests are concerned, we shouldn’t be surprised that SmartScreen performs better than the Safe Browsing API, because that’s been the case ever since NSS started testing browser SEM blocking performance some years ago.

We might speculate that Microsoft’s vast Windows base gives it an advantage over Google when it comes to gathering intelligence on malware, although that doesn’t explain why it still beats Google at spotting dodgy URLs which both should, in theory, see equally well.

The difference between Edge and Chrome seems to hold even when they’re running on other platforms, for example when Windows 10 S (which runs only Windows Store apps) is pitted against Chromebooks, Google’s cloud-oriented computers running Chrome OS.

Here, Edge scored a 92% success rate against phishing URLs while Chrome achieved 75%, both scores identical to the same browsers running on Windows 10.

Because they don’t run executables, Chromebooks are undoubtedly superior to Windows computers against SEM malware but when it comes to URL detection, these tests suggest they lag.

An interesting question is what all this means for companies using more than one browser, either for compatibility reasons (i.e. older versions of Internet Explorer) or because they fear being exposed to a specific security vulnerability affecting one.

That’s a complex judgment not assessed by NSS Labs, but it shouldn’t escape our notice that Edge came last at the CanSecWest Pwn2Own contest earlier this year, with contestants finding exploitable software flaws in it.

These phishing and SEM tests are not the whole story.

In the end, focussing on browser security technology might be to miss the point that devices of all kinds come with other security layers, chief among them their users.

Which is to say that while the person using a computer can be a weakness, they could, if properly trained, also be a strength. Whatever the differences between one browser and another, performance scores should never be seen as compensation for more fundamental weaknesses.


Source: Naked Security

 

Hackers steal restricted information on F-35 fighter, JDAM, P-8 and C-130

Published: 13 October 2017

Add the Australian Signals Directorate (ASD) to the already long list of organizations compromised by the security weaknesses of third-party contractors.

But in this case it wasn’t just credit card and other consumer data compromised. It was detailed information on some of the nation’s major military defence systems – aircraft, bombs and naval vessels.

The first mention of the breach came almost in passing and with few details, deep in the Australian Cyber Security Centre (ACSC) 2017 Threat Report. It said that almost a year ago, in November 2016, the ACSC:

…became aware that a malicious cyber adversary had successfully compromised the network of a small Australian company with contracting links to national security projects. ACSC analysis confirmed that the adversary had sustained access to the network for an extended period of time and had stolen a significant amount of data.

The report didn’t name the company, its size or what kind of national security work it did.

As it turns out, it should have been clear that the company – a 50-person aerospace engineering firm with a single employee handling all IT-related functions – was an obviously weak link in the security chain.

That and quite a bit more detail – although the company still remained unnamed – came earlier this week, from Mitchell Clarke, incident response manager at the ASD, in a presentation at the national conference of the Australian Information Security Association (AISA) in Sydney.

According to ZDNet correspondent Stilgherrian, who obtained an audio of the presentation, Clarke said the attacker(s), who had been inside the company’s network at least since the previous July, had “full and unfettered access” for several months, and exfiltrated about 30GB of data including, “restricted technical information on the F-35 Joint Strike Fighter, the P-8 Poseidon maritime patrol aircraft, the C-130 transport aircraft, the Joint Direct Attack Munition (JDAM) smart bomb kit, and a few Australian naval vessels.”

He said the attackers, who used a tool called China Chopper, could have been state sponsored or a criminal gang.

And they likely had little trouble gaining access.

Clarke, who named the advanced persistent threat (APT) actor “APT ALF” after a character in the Australian television soap opera Home and Away, said that besides the single IT employee, who had only been on the job for nine months, the “mum and dad-type business” had major weaknesses:

There was no protective DMZ network, no regular patching regime, and a common Local Administrator account password on all servers. Hosts had many internet-facing services.

Access was initially gained by exploiting a 12-month-old vulnerability in the company’s IT Helpdesk Portal, which was mounting the company’s file server using the Domain Administrator account. Lateral movement using those same credentials eventually gave the attacker access to the domain controller and the remote desktop server, and to email and other sensitive information.

Beyond that, Clarke said the firm’s Internet-facing services still had their default passwords of admin and guest. He called the months between when the hackers gained access and when the intrusion was discovered “Alf’s Mystery Happy Fun Time”.

The Age reported that a spokesperson for ACSC said while the data was “commercially sensitive,” it was not classified.

But Clarke said among the stolen documents was one that, “was like a Y-diagram of one of the Navy’s new ships and you could zoom in down the captain’s chair and see that it’s one metre away from the nav (navigation) chair and that sort of thing.”

Whatever the sensitivity of the data, it seems certain that the breached firm wasn’t following what the ASD calls the “Essential Eight Strategies to Mitigate Targeted Cyber Intrusions.”

The agency said while no strategy is guaranteed to prevent cyber intrusions, simply implementing the “Top 4” would block 85% of adversary techniques. They amount to what most security experts, and regular readers of Naked Security, will recognise as basic security hygiene:

  1. Use application allow-listing so that only approved programs can run
  2. Patch applications like Flash, web browsers, Microsoft Office, Java and PDF viewers
  3. Patch operating systems
  4. Restrict admin privileges based on user duties
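
At its core, strategy 1 above is just comparing a binary’s cryptographic hash against a list of approved hashes before letting it run. A minimal sketch of that check (the file contents, paths and helper names here are illustrative, not any particular product’s implementation):

```python
import hashlib
import os
import tempfile
from pathlib import Path

def sha256_of(path: Path) -> str:
    # Hash the file's contents; the allow list stores these digests.
    return hashlib.sha256(path.read_bytes()).hexdigest()

def is_allowed(path: Path, allow_list: set) -> bool:
    # Permit execution only if the binary's hash is on the approved list.
    return sha256_of(path) in allow_list

def make_temp_program(contents: bytes) -> Path:
    # Create a throwaway file standing in for an executable.
    fd, name = tempfile.mkstemp()
    os.close(fd)
    path = Path(name)
    path.write_bytes(contents)
    return path

approved = make_temp_program(b"#!/bin/sh\necho ok\n")
rogue = make_temp_program(b"#!/bin/sh\necho pwned\n")

allow_list = {sha256_of(approved)}
assert is_allowed(approved, allow_list)
assert not is_allowed(rogue, allow_list)
```

Hash-based allow-listing means that even a renamed or freshly dropped binary is blocked unless its exact contents were explicitly approved, which is why it sits at the top of the ASD’s list.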

According to ASD, those strategies have been mandatory for all Australian government organizations since 2013.

Source: Naked Security

 
