Aadhaar breaches fuelled by rogue admin accounts

Created: 9 January 2018

Not long ago trumpeted as the world’s largest biometric database, India’s Aadhaar system covers 1.2bn citizens. Lately, though, it’s acquired a less impressive reputation – that it’s one of the easiest to breach.

In a matter of days, two sets of journalists claimed to have bypassed its security with worrying ease, apparently by gaining access to a layer of privileged and admin accounts that have ended up in the wrong hands.

In the most widely-reported incident, a researcher paid Rs 500 ($8) to an anonymous WhatsApp seller for credentials giving access to the name, address, phone number, postal PIN, email address and photograph of anyone in Aadhaar after entering their 12-digit UIDAI (Unique Identification Authority of India) number.

Worse, for a few dollars extra, the researcher was offered software capable of printing this out as a usable Aadhaar identity card.

A day later, a second investigation reported being able to acquire access to an admin account for between Rs 500 and Rs 6,000 ($8 to $95) that conferred the godlike ability to create additional admin accounts, which in turn could create new admin accounts – and so on.

Which meant:

Once you are an admin, you can make ANYONE YOU CHOOSE an admin of the portal. You could be an Indian, you could be a foreign national, none of it matters – the Aadhaar database won’t ask.

The revelations continued this week with the Times of India reporting that despite November reports that up to 200 Indian government websites were displaying details of Aadhaar identities in public, some continued to do so weeks later.

None of this is good news for Aadhaar’s reputation, of course, but the biggest worry could turn out to be the authorities’ confused response.

When confronted with the fact that admin accounts were being traded, one UIDAI regional official seemed shocked:

No third person in Punjab should have a login access to our official portal. Anyone else having access is illegal, and is a major national security breach.

And yet, an official UIDAI statement made to news site Buzzfeed more resembled an angry denial than an admission of problems that need to be fixed:

Claims of bypassing or duping the Aadhaar enrolment system are totally unfounded. Aadhaar data is fully safe and secure and has robust, uncompromised security.

None of Aadhaar’s biometric data was compromised, the source added, while appearing to suggest that criminal charges might be filed against journalists for unauthorised access.

It’s not clear from local media reports how serious this threat is, but if it is serious, acting on it would be deeply counter-productive. If the system has weaknesses, one way they will be uncovered is by researchers and journalists reporting on them.

Indians don’t officially have to register with Aadhaar but can’t access government services without being part of the system. Take-up has been hugely successful, with the system reportedly enrolling 99% of Indians over the age of 18.

Not surprisingly, successive governments have become heavily invested in its fate and predictably sensitive to reports of security failures that might reflect badly on them.

This is one reason why critics think massive government-backed identity databases carry huge risks. When a private company suffers a breach, in principle it can be held to account by regulators and the force of law. If the same happens to a government-administered database, blame might be temptingly easy to ignore, cover up or shift to junior levels.

It’s too early to declare Aadhaar a broken system but neither, so far, is it exactly proving the pessimists’ predictions wrong.


Source: Naked Security

 

Apple issues Spectre fix with iOS 11.2.2 update

Created: 9 January 2018

On 8 January, Apple made available iOS 11.2.2, which includes a security update for Spectre, one of the CPU-level vulnerabilities making the headlines of late. (If you need a full rundown about what these processor bugs entail and how they work, take a moment to read Paul Ducklin’s comprehensive post on the topic.)

This iOS update specifically addresses CVE-2017-5753 and CVE-2017-5715, two chip-level vulnerabilities collectively known as Spectre. At a very high level, all of these chip-level vulnerabilities, Spectre included, take advantage of flaws in hardware to allow an attacker to potentially read or steal data.

Thankfully, these flaws can be mitigated at the operating-system or software level when vendors make patches available. The two Spectre vulnerabilities can be triggered via JavaScript running in a web browser, so the iOS 11.2.2 update specifically makes changes to Apple’s Safari and WebKit to mitigate their effects.
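One of WebKit’s reported mitigations was to reduce the precision of JavaScript timers, because the browser variant of Spectre depends on timing memory accesses finely enough to tell a fast cached read from a slow uncached one. Here’s a toy sketch of that principle in Python – the latencies below are simulated numbers, not real cache measurements:

```python
# Toy illustration of why coarsening a browser's timer blunts
# Spectre-style cache probing. Latencies are simulated, not measured.

def observed(duration_ns, resolution_ns):
    """What a timer with the given resolution reports for a duration."""
    return (duration_ns // resolution_ns) * resolution_ns

CACHED_NS, UNCACHED_NS = 10, 100  # plausible L1-hit vs DRAM-miss latencies

for resolution in (1, 1000):  # nanosecond-precision vs microsecond-precision
    fast = observed(CACHED_NS, resolution)
    slow = observed(UNCACHED_NS, resolution)
    print(f"resolution={resolution}ns: cached={fast} uncached={slow} "
          f"distinguishable={fast != slow}")
```

With a nanosecond timer the two reads separate cleanly; rounded to a microsecond tick, both report zero and the attack loses its signal.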

There were a number of chip vulnerabilities revealed concurrently earlier this month – they’re similar but not the same. Often mentioned in the same breath as Spectre is Meltdown, CVE-2017-5754. While Meltdown affects most types of Intel processors made since 1995 – meaning almost all the world’s desktops, laptops, and servers – Spectre affects an even broader array of processor types, not just Intel, but AMD and ARM as well.

Most of the world’s smartphones, including iPhones and Samsung phones, run on ARM chips. While yes, technically, Spectre makes most of us with a smartphone in our hands vulnerable, thankfully the Spectre flaws have been found by vendors and researchers to be much harder to exploit overall than Meltdown, so they haven’t been as high a priority for a fix.

So if we got a Spectre patch yesterday and Spectre’s a lower priority, where is the fix for Meltdown? After all, Meltdown is not mitigated by this iOS patch. That’s because Apple already released an update to mitigate Meltdown: The Meltdown fix was in the iOS 11.2 update back in December, though we didn’t know it at the time. (If you check the iOS 11.2 patch notes, you’ll see that the full details on the Kernel-level update, and the CVE addressed, were only added on 4 January.)

In fact, the vast majority of us didn’t know about Meltdown’s existence until January. However, according to the official Meltdown research paper, the researchers who discovered Meltdown were able to effectively work within a responsible disclosure period with vendors to get patches out for OSX, Windows and Linux prior to public disclosure. So kudos to all involved there and hooray for coordinated disclosure.

If you’re an iOS user on iPhone or iPad, this iOS 11.2.2 update should already be available to you to download and install – as always, we recommend you patch as soon as you can. Hopefully you’ve already applied the December iOS 11.2 update to get the fix for Meltdown!

(Are you a Google Android user wondering where your update is? Google issued a patch for you back on 5 January for the two Spectre vulns and the Meltdown vulnerability.)


Source: Naked Security

 

Spyware user tracked boyfriend to have him killed by hitman

Created: 9 January 2018

Stop me if you’ve heard this one:

Boy meets girl. Girl tracks boy with spyware. Girl (allegedly) hires hitman to kill boy. Girl arrested by hitman, who actually works for the FBI.

Wait a minute. What’s that you say? It’s not an elevator pitch for a thriller? It actually happened?!

It sure did. Unfortunately, it’s not humorous, either, given that a man allegedly could have been murdered.

The story involves a Los Angeles woman who goes by the handle “Mz. Fiesty” on social media.

According to the US Attorney’s Office for the Central District of California, Rasheeda Johnson Turner, 37, was arrested last month on federal charges that she hired a hitman-slash-FBI informant to kill her boyfriend so she could get her hands on his life insurance payout.

The boyfriend/would-be victim is identified in court documents as L.G.

Turner allegedly told the informant she was the beneficiary of a $150,000 life insurance policy and that she would pay the killer $50,000. Over the course of two weeks, she allegedly told the purported hitman that she originally planned to do the deed herself and had sourced “pure acid” from a plumber to get it done.

According to the criminal complaint, Turner initially tried to hire a hitman in November, but he wasn’t interested in the job. The FBI got wind of the alleged plot and managed to get an informant introduced to Turner. Turner, also known as Feisty or Mz. Feisty, is, according to her social media posts, an amateur film star with a rap sheet: she was convicted in 2005 for forgery and theft and arrested in 2016 for spousal battery, having allegedly assaulted L.G.

The informant/”hitman” agreed to meet with Turner on 4 December. Before the meeting, he was fitted with a wire to record audio and video. According to the complaint, Turner was recorded as saying that being a mom got in the way of being a murderer herself:

I was gonna off blood, myself, but it’s hard because I got a kid.

Turner actually rented a room to kill L.G., the complaint alleges, but she called it off since she was afraid her daughter would interrupt.

So she allegedly decided instead to hire a professional and pay him out of the life-insurance money:

Once he is dead, I get the death certificate, then they pay me, what? Within thirty days, the life insurance or whatever, and I said I cash the money out or whatever.

OK, how do you want it done? the hitman wanted to know.

Doesn’t matter, she allegedly said, as long as his phone disappears:

I just want him dead and his phone gone because, you know, we be texting back and forth.

She allegedly offered to pay the informant a third of the insurance money: $50,000. Then, she showed him a photo of L.G. and told him that the victim sleeps in his car – a Lexus – at night. She also allegedly showed the informant a tracking app on her phone that allows her to locate the victim on a map.

I can tell you when he over there. I can hit you from my other number, and be like O.K. Yeah, I’ll do that.

On 7 December, Turner was reportedly ready for L.G. to exit the world. What she allegedly texted to the fake hitman:

That fly needs to be swatted.

The next day, Turner allegedly told the informant that it had to be done soon, since L.G. was getting close to a new woman, and she was afraid she’d get yanked off his bank accounts and life insurance policy.

I’m like, oh no, we gotta get it done ASAP so we can still get that f**kin’ money.

Then, she took the informant on a tour of the places where L.G. tended to sleep in his car. Turner told him she wanted the victim killed the next week. When it was done, she told the hitman, she wanted him to let her know by using the code “Operation Dumbo.” After she got that code, she’d remove the tracking app from her phone, she said.

Turner allegedly said she’d pay the informant part of the money upfront – as soon as she got it from a credit card scam.

How did she get this good at getting away with murder? From TV, she allegedly told the informant.

You gotta beat them at they own game. I watch all that killer shows, so it tells you how to get away with sh*t. It tells you what to do.

Turner was arrested on 13 December and charged with murder-for-hire. She was due to be arraigned on 4 January.


Source: Naked Security

 

Facebook bug could have exposed your phone number to marketers

Created: 9 January 2018

You know that Facebook data-use policy, the one that promises it’s not going to spread our personal information to outfits that want to slice and dice and analyze us into chop suey and market us into tomato paste?

We do not share information that personally identifies you (personally identifiable information is information like name or email address that can by itself be used to contact you or identifies who you are) with advertising, measurement or analytics partners unless you give us permission.

Yea, well… funny thing about that…

Turns out that up until a few weeks ago, against its own policy, Facebook’s self-service ad-targeting tools could have squeezed users’ cellphone numbers from their email addresses… albeit very, verrrrry sloooowly. The same bug could have also been used to collect phone numbers for Facebook users who visited a particular webpage.

Finding the bug earned a group of researchers from the US, France and Germany a bug bounty of $5,000. They reported the problem at the end of May, and Facebook sewed up the hole on 22 December.

That means that phone numbers could have been accessed for at least seven months, although Facebook says that there’s no evidence that it happened.

The researchers described in a paper how they used one of Facebook’s self-serve ad-targeting tools called Custom Audiences to ascertain people’s phone numbers.

That tool lets advertisers upload lists of customer data, such as email addresses and phone numbers. It takes about 30 minutes for the tool to compare an advertiser’s uploaded customer list to Facebook’s user data, and then presto: the advertisers can target-market Facebook users whose personal data they already have.

Custom Audiences also throws in other useful information: it tells advertisers how many of its users will see an ad targeted to a given list, and in the case of multiple targeted-ad lists, it tells advertisers how much the lists overlap.

And that’s where the bug lies. Until Facebook fixed it last month, the data on audience size and overlap could be exploited to reveal data about Facebook users that was never meant to be offered up. The hole has to do with how Facebook rounded up the figures to obscure exactly how many users were in various audiences.

As far as resources go, the initial exploitation is the most “expensive” aspect of the exploit, the researchers said. In one evaluation of the attack, they recruited 22 volunteers with Facebook accounts who lived either in Boston or in France.

It took 30 minutes to upload two area code lists for Boston (617 and 857) where the phones had 7 digits to infer. Each list had one million phone numbers, all with a single digit in common. France was even tougher to chew through: it took a week to generate 200 million possible phone numbers starting with 6 or 7 and to upload each list.

But after that, it was fairly smooth sailing.

The resulting audiences can be re-used to infer the phone number of any user.

The researchers went on to use Facebook’s tools to repeatedly compare those audience lists against others generated using the targets’ emails. They kept an eye out for changes to the estimated audience figures that occurred when an email address matched a phone number, revealing users’ numbers drip by drip, one digit at a time.

The attack apparently worked on any Facebook user with a single phone number associated with their account; it stumbled when people had provided multiple phone numbers, or none. It took under 20 minutes per user to get a phone number.
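To make that digit-by-digit process concrete, here’s a minimal sketch of the core comparison as we read the paper’s description. Everything in it is illustrative: audience_size() stands in for the rounded size and overlap estimates Facebook has since removed, and the helper names are invented.

```python
# Illustrative sketch of the digit-by-digit inference described above.
# audience_size() is a stand-in for the rounded audience-size estimate
# Custom Audiences used to expose; it is NOT a real Facebook API.

def audience_size(phone_numbers, extra_emails=()):
    """Placeholder for the (since-removed) rounded size estimate of an
    audience built from phone_numbers plus any users matched by
    extra_emails."""
    raise NotImplementedError("illustrative stand-in only")

def infer_digit(target_email, lists_by_digit):
    """lists_by_digit maps each digit 0-9 to a pre-uploaded list of
    candidate numbers sharing that digit at the position being probed.
    The matching digit is the one whose list already contains the
    target's number, so adding their email leaves the estimate
    unchanged."""
    for digit, phone_numbers in lists_by_digit.items():
        if (audience_size(phone_numbers, (target_email,))
                == audience_size(phone_numbers)):
            return digit
    return None  # e.g. no phone number, or several, on the account
```

Repeating this once per digit position is consistent with the roughly 20 minutes per user the researchers reported: the expensive part, uploading the huge candidate lists, only has to happen once.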

The researchers used the same technique to collect phone numbers en masse for volunteers who visited a website with the “tracking pixel” Facebook provides to help site operators target ads to visitors. As they explain, Facebook gives advertisers some code – referred to as a tracking pixel, since it was historically implemented as a one-pixel image – to include on their websites. When users visit the advertiser’s website, the code makes requests to Facebook, thereby adding the user to an audience.

The audiences aren’t defined by “attributes,” such as visitors’ gender or their location. Rather, these are “PII-based audiences.” Advertisers select specific users they want to target, by either uploading known email addresses, names, or other personally identifying information (PII), or by selecting users who visited an external website that’s under the advertiser’s control.

The tracking-pixel version of the exploit succeeded in getting the researchers the phone numbers they were after. It appeared to work for all accounts Facebook defines as daily active users.

Facebook fixed the bug by weakening its ad-targeting tools: it no longer shows audience sizes when customer data is used to make new ad-targeting lists.

Facebook Vice President for Ads Rob Goldman put out a thank-you statement for the researchers’ find:

We’re grateful to the researcher who brought this to our attention through our bug bounty program. While we haven’t seen any abuse of this complex technique, we’ve made product changes to prevent this from occurring.


Source: Naked Security

 

Facebook needs fixing, says Mark Zuckerberg

Created: 8 January 2018

Mark Zuckerberg, the wizard who pulls the levers behind the Facebook curtain, has set himself a doozy of a challenge for 2018: to fix Facebook.

The most pressing problems, he said in a post on Thursday, are protecting the Facebook community from abuse and hate, stopping nation states from using Facebook like a hacky-sack in other countries’ elections, and making sure that all of us dopamine-addicted users spend our time on the platform productively (instead of turning into passive, miserable, Facebook-fixated couch potatoes).

The Facebook CEO has done these personal challenges since 2009, when he decided to dress like a grown-up and wear a tie every day:

That first year the economy was in a deep recession and Facebook was not yet profitable. We needed to get serious about making sure Facebook had a sustainable business model. It was a serious year, and I wore a tie every day as a reminder.

His list after 2009:

  • 2010: Learn Mandarin
  • 2011: Only eat meat he had killed himself
  • 2013: Meet one person a day outside Facebook
  • 2015: Read a book every other week
  • 2016: Build a simple AI to run his home

He says the current moment feels dire in much the way that first wear-a-tie year did, when Facebook was unprofitable and the economy was deep in recession:

The world feels anxious and divided, and Facebook has a lot of work to do – whether it’s protecting our community from abuse and hate, defending against interference by nation states, or making sure that time spent on Facebook is time well spent.

My personal challenge for 2018 is to focus on fixing these important issues. We won’t prevent all mistakes or abuse, but we currently make too many errors enforcing our policies and preventing misuse of our tools. If we’re successful this year then we’ll end 2018 on a much better trajectory.

Commenters on his post include people who found Zuckerberg’s goal admirable, as well as those who grumbled about the downsides of the platform.

Facebook itself recently confronted the existential question of whether social media can be bad for us. Last month, Facebook publicly recognized some of its platform’s detrimental effects but suggested the cure is to engage with the platform more: more messages, more comments and more posts. The idea, it said at the time, was to actively engage with friends, relatives, classmates and colleagues, rather than passively consuming content.

A study we conducted with Robert Kraut at Carnegie Mellon University found that people who sent or received more messages, comments and Timeline posts reported improvements in social support, depression and loneliness. The positive effects were even stronger when people talked with their close friends online. Simply broadcasting status updates wasn’t enough; people had to interact one-on-one with others in their network.

Research has shown positives coming out of social media, including the self-affirmation that comes from reminiscing on past meaningful interactions – for example, seeing photos users had been tagged in and comments left by friends. But social media also has a darker side: social media-enabled trolling that can lead to problems as severe as suicide.

There have been many studies of this darker side of Facebook. Five themes emerged from one such study: managing inappropriate or annoying content, being tethered to Facebook, perceived lack of privacy and control, social comparison and jealousy, and relationship tension.

Facebook’s acknowledgement that social media can be bad for us came after months of soul-searching, and a good deal of regret, from the very people who built Facebook. For example, former Facebook vice-president of user growth Chamath Palihapitiya last month gave a scathing speech about the corporation, saying that he regrets his part in building tools that destroy “the social fabric of how society works.”

The month before, Facebook ex-president Sean Parker admitted that Facebook creators were from the start well aware that they were exploiting a “vulnerability in human psychology” to get people addicted to the “little dopamine hit” when someone likes or comments on your page.

Other ex-Facebookers who’ve lately stepped back to question the repercussions of what they’ve created include Facebook “like” button co-creator Justin Rosenstein and former Facebook product manager Leah Pearlman, who have both implemented measures to curb their social media dependence.

It’s easy enough to point out where Facebook needs fixing. It’s tougher to come up with ways to fix the vast problems Zuckerberg has outlined – something he noted himself in his post:

These issues touch on questions of history, civics, political philosophy, media, government, and of course technology. I’m looking forward to bringing groups of experts together to discuss and help work through these topics.

He pointed to one example: the centralization of power in technology, the opposite of what many people set out to achieve when building the internet we now have:

A lot of us got into technology because we believe it can be a decentralizing force that puts more power in people’s hands. (The first four words of Facebook’s mission have always been “give people the power”.) Back in the 1990s and 2000s, most people believed technology would be a decentralizing force.

But today, many people have lost faith in that promise. With the rise of a small number of big tech companies – and governments using technology to watch their citizens – many people now believe technology only centralizes power rather than decentralizes it.

Pushing against such trends is the rise of encryption and cryptocurrency, Zuckerberg says. Such technologies take power away from centralized systems and “put it back into people’s hands,” he said.

But without regulation, those hands can prove to be buttery, he said. Cryptocurrencies can go up in a puff of smoke, for example, leaving little recourse to those with emptied wallets.

Zuckerberg said fine: let’s “go deeper” and figure out how to make these things work for us:

[Decentralized technologies] come with the risk of being harder to control. I’m interested to go deeper and study the positive and negative aspects of these technologies, and how best to use them in our services.


Source: Naked Security

 

Star Wars: The Last Jedi – the security review

Created: 8 January 2018

Last week I went to “go see a Star War,” and the Naked Security team asked me to write about it…

Trekkie though I am, I’ll try to put my franchise allegiance to one side for this piece and take an objective look at the security angles in Star Wars: The Last Jedi. And yes, there actually is something to discuss here. It’s all at a very generalized level of course – I don’t think we’ll ever see the day when we’ll watch Kylo Ren loading up Kali Linux – so take this with many grains of salt.

Akin to my Mr. Robot reviews, I’m not going to review the whole movie, just the security bits (you’re on NakedSecurity, after all) – and yes, there will be spoilers!

WARNING: SPOILERS AHEAD – SCROLL DOWN TO READ ON

 

 

Opening scene red teaming (just never mind how it ends)

When I sat down to watch this movie, I wasn’t sure if there’d be anything for me to write about. Security? In Star Wars? Finding an apparently-put-there-on-purpose vulnerability in the giant Death Star and exploiting it with lasers, okay sure. But that’s been done… a long time ago in a galaxy far far away. (Sorry.) Thankfully the very first scene of The Last Jedi is, in a weird way, such a great advertisement for red teaming that I fully expect to see it included in future job descriptions. Never mind that it has sad, catastrophic consequences! That’s a big thing to disregard, I know, but bear with me.

We have Poe, being a hotshot, distracting the First Order by being as conspicuous as possible while trying to do something much more underhanded. Reminiscent of every VoIP conference call ever, the communication line cuts out and nobody knows what anyone’s actually saying. What’s the harm, right? He’s just one guy after all. Of course, the bad guys eventually realize they’ve been had and that they should have shot Poe down five minutes ago. Hijinks and plot developments ensue.

This whole scene reminded me so much of the war stories I’ve heard exchanged by pen testers over the years. These are red-team professionals who are hired by companies to expose their weaknesses, and we’re not just talking software. They use a vast arsenal of social engineering methods to gain entry into offices or get employees to compromise their company’s security, sometimes by pretending to be someone they’re not, sometimes by creating confusion and taking advantage of the chaos. Usually by the time a pen tester is discovered, they’ve already got the information or hit the target they needed to complete their engagement.

Of course, the massive difference between what we see in the movie’s opening scene and what pen testers do professionally is that pen testers are hired by the organization they’re infiltrating so the company can find out where their weaknesses are and work to address them. The First Order did not hire Poe… as far as we know anyway – now that would be a massive plot twist. But from the outside looking in, when a pen tester is trying to infiltrate their target and not trying to be particularly subtle about it, the interaction might look just a little bit like this scene.

Infosec didn’t invent this kind of thing of course; subterfuge has been going on as long as there have been spies and soldiers, which is to say, since forever. Never mind that the end result of this particular “engagement” is disastrous for the Resistance, and not so great for Poe either really – but you can’t win them all. Still, taken out of context, if I was looking for a quick and easy allegory to show what it looks like when a pen tester is at work, this wouldn’t be a terrible clip to call on.

DJ, the Greyhat

At one point in the film, there’s a whole side plot introduced about the need to crack the encryption of something-or-other, requiring the services of one master codebreaker named DJ. The encryption of the something-or-other doesn’t really matter here (arguably it’s a completely unnecessary plotline anyway), but DJ is worth a mention.

Firstly, let’s take a look at how codebreaker DJ is introduced. We find out he likes to gamble at casinos, and can be frequently found in, and I quote here: “A terrible place filled with the worst people in the galaxy.” Basically, it’s space-Vegas. If they had shown DJ at a hacker conference and not merely in a casino, I’d be writing about Defcon Star Wars. (Rose nailed it when she said “I wish I could put my fist through this whole lousy beautiful town,” I think she speaks for many of us who make the trek to Vegas every year for “hacker summer camp.”)

As we get to know DJ through his actions, we see that he’s amazingly resourceful – of course he knows how to lockpick! – and he knows how to use seemingly innocent things for unusual purposes, like using Rose’s pendant as a conductor.

Like any good hacker, he has a considerable skillset that can be used for good or evil, and DJ has no qualms about working for either “side” depending on who’s paying. In modern parlance, you could call DJ a greyhat: he’s up for working with “good guys,” but in the end his motivation is cash and not some moral high ground. (This does become a bit of a semantic argument about how you define blackhat hacking: you could certainly argue that if you’re not explicitly working for “good,” there’s no grey there and you’re a blackhat. But only a Sith deals in absolutes, right?)

When DJ, Rose and Finn get caught by the First Order, DJ doesn’t hesitate to cut a deal in return for clemency – not unlike criminal hackers who get caught by law enforcement and then make a career out of educating the feds. This phenomenon happens often enough in the computer security world that there are even memes about it.

I’m just glad we didn’t see DJ in a black hoodie, otherwise I’d be getting Mr. Robot flashbacks mixed up in my Star Wars and I’m a confused enough Trekkie as it is.

One of the best lines in the movie, predictably, came from Yoda: “The greatest teacher, failure is.” I’ll be damned if I don’t see that on a slide deck at a conference within the next year.

What did you think? Did The Last Jedi live up to the hype for you? And are there any other security angles I may have missed? Let me know in the comments below.


Source: Naked Security

 

Ex-NSA hacker builds AI tool to hunt hate groups’ symbols online

Created: 8 January 2018

Emily Crose, ex-hacker for the National Security Agency (NSA), ex-Reddit moderator and current network threat hunter at a cybersecurity startup, wanted to be in Charlottesville, Virginia, to join in the protest against white supremacists in August.

Three people died in that protest. One of Crose’s friends was attacked and hurt by a neo-Nazi.

As Motherboard’s Lorenzo Franceschi-Bicchierai tells it, Crose was horrified by the violence of the event. But she was also inspired by her friend’s courage.

Her response has been to create and train an Artificial Intelligence (AI) tool to unmask hate groups online, be they on Twitter, Reddit, or Facebook, by using object recognition to automatically spot the symbols used by white nationalists.

The images her tool automatically seeks out are so-called dog whistles, be they the Black Sun (also known as the “Schwarze Sonne,” an image based on an ancient sun wheel artifact created by pagan German and Norse tribes that was later adopted by the Nazi SS and which has been incorporated into neo-Nazi logos) or alt-right doctored Pepe the frog memes.

Crose dubbed the AI tool NEMESIS. She says it’s named for the Greek goddess of retribution, who punishes those who succumb to arrogance before the gods:

Take that to mean whatever you will, but you have to admit that it sounds pretty cool.
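Mechanically, a tool like NEMESIS reduces to running an object-recognition model over images from public posts and flagging those containing tracked symbols. The tool itself isn’t public, so the sketch below is purely illustrative: detect() stands in for whatever trained model Crose uses, and the labels are invented examples.

```python
# Purely illustrative sketch of a symbol-spotting pipeline; NEMESIS
# itself is not public. detect() stands in for a trained
# object-recognition model, and the labels are invented examples.

TRACKED_SYMBOLS = {"black_sun", "doctored_pepe"}

def detect(image_bytes):
    """Placeholder: return the set of symbol labels the model finds
    in an image."""
    raise NotImplementedError("stand-in for the real model")

def flag_posts(posts):
    """posts: iterable of (post_id, image_bytes) from public feeds.
    Yields IDs of posts whose images contain a tracked symbol."""
    for post_id, image_bytes in posts:
        if detect(image_bytes) & TRACKED_SYMBOLS:
            yield post_id
```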

Crose says it’s just a proof of concept at this point …

 

Microsoft could soon be “password free”

Created: 5 January 2018

As each New Year rolls by, someone somewhere usually predicts the death of passwords as a trend for the coming months.

Every year so far, they’ve been proved wrong – somehow passwords cling on despite an exhausting list of maladies, mostly to do with how easy they are to forget, steal and misuse.

The moral would seem to be never to listen to predictions about passwords. However, post-Christmas comments by Microsoft chief information security officer Bret Arsenault offer a small but tantalising sign that the password age might finally be nearing its end.

The evidence is usage figures for Windows Hello, the company’s technology for authenticating Windows users using facial recognition.

Launched in 2015 as part of Windows 10, Hello, Arsenault said, is now the default way for the company’s 125,000 employees to log into computers.

The majority of Microsoft employees already log in to their computers using Windows Hello for Business instead of passwords. Very soon we expect all of our employees will be able to go completely password free.

No surprise that Microsoft might champion its own security technology, but Arsenault goes on to make an argument for replacing passwords that will strike a chord among professionals who manage credentials.

For several decades, the industry has focused on securing devices […] but it’s not enough. We should also be focused on securing individuals. We can enhance your experience and security by letting you become the password.

Whatever one thinks of Windows Hello, or biometrics in general, his observation sounds fair.

Passwords were created for a world of devices and systems – not for one in which there is a pressing need to verify a person’s identity in real time using something more substantial than a string of characters.

One view is that multi-factor authentication (MFA) does this without the need to abolish passwords completely, but the counter-argument is that leaving passwords in place is unnecessary, complicated and needlessly insecure.

Better the clean break with the past. As Microsoft says in its Hello marketing spiel – “you are the password.”

A caution is that while facial ID systems abolish passwords – unique data hopefully known only to the user – they don’t abolish the fact that discrete data must ultimately underpin this.

In the case of Hello, that’s biometric data, which has to be stored somewhere, which Microsoft recently made clear should be inside a Trusted Platform Module (TPM) chip.

As November’s scare over Infineon TPMs reminded us, these are not invulnerable. Changing a compromised password is hard enough but doing the same for a lost face, finger or voice print might be impossible.

Nor, ironically, has Hello itself been immune from security worries, such as the recent research that found that it could be spoofed by nothing more complicated than a specially-made infra-red photograph of the account holder.

In fact, the research served to underline how hard it would be to defeat Hello under real-world conditions.

Getting hold of a high-definition IR photograph of an account holder wouldn’t be trivial, while some of the technical weaknesses revealed by the attack were connected to the immaturity of the camera hardware Hello needs for facial recognition (some cameras don’t support Hello’s advanced anti-spoofing).

It could be the cost and maturity of facial recognition cameras that presents the biggest barrier to Hello, not a reluctance to let go of passwords.

As Microsoft notes:

Already, roughly 70 percent of Windows 10 users with biometric-enabled devices are choosing Windows Hello over traditional passwords.

Which perhaps raises the question of why the other 30% of users who’ve invested in a camera aren’t using it with Hello.

Perhaps what will unshackle users from passwords will be a patchwork of biometric systems (with Apple’s Face ID a leading contender), of which Hello will be only one. However much security this adds, it won’t necessarily be simpler or cheaper for users.

Will anyone miss passwords when they eventually disappear? That seems unlikely, but at that probably far-off moment there will be plenty of people feeling very nostalgic for the simpler world passwords served.


Source: Naked Security

 

JPMorgan doesn’t trust YouTube to keep its ads out of sketchy channels

Created: 5 January 2018

Last March, Google found itself apologizing to many of its YouTube advertisers.

It was apologizing to their backs. They were running for the hills. Brands such as Marks & Spencer, McDonald’s, L’Oreal, Audi, Tesco and the BBC pulled ads that had wound up running alongside videos from rape apologists, anti-Semites, hate preachers and IS extremists.

The most recent YouTube ad scandal landed in November, when an investigation by the BBC found that a glitch in YouTube’s tool for tracking obscene comments on kids’ videos meant the tool hadn’t been working right for over a year. Meanwhile, an investigation by The Times found that YouTube ads were funding the habits of perverts.

Google’s response: sorry, we’ll do better!

Eight months later, the response from the advertisers: You’re not doing enough, and you’re not doing it fast enough.

Speaking at London’s Advertising Week Europe in March, Google’s European chief Matt Brittin said that the company was looking to give advertisers easier control over where their ads appear, that 98% of flagged YouTube content was being examined within 24 hours, and that it could, and would, do even better. However, observers noted that Brittin didn’t say anything about devoting staff to proactively seek out inappropriate content instead of just jumping on it after users had already seen and flagged it.

Since then, Google has announced other fixes, such as restricting ads only to creators and channels with 10,000 views and hiring larger numbers of people to monitor unsuitable videos, among others.

Sorry, that doesn’t cut it, say some advertisers, including JPMorgan Chase. The bank pulled its ads in March, got sick and tired of waiting for Google to fix the mess, and finally said, Forget it: we’ll fix this ourselves.

The result, as reported by Business Insider UK, is a proprietary algorithm the bank built that’s designed to select allowlisted, “safe” channels to run ads on.

Out of more than 5 million YouTube channels, JPMorgan Chase winnowed the list down to 3,000 YouTube channels on which it can countenance having its ads appear.

The bank’s algorithm plugs into YouTube’s application programming interface (API) to select safe channels. It was built by the company’s internal programmers and media-buying teams.

As Business Insider describes it, there are 17 layers or filters involved.

One of the filters, for example, looks at the total video count on a channel, which automatically sifts out channels with one-off viral videos. The bank also looks at channels’ subscriber counts, the general topics channels focus on, language, and even the comments on different channels’ videos.
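Business Insider’s description suggests a layered pipeline: each filter prunes the channel pool, and only channels that pass every layer make the allowlist. Here’s a minimal sketch under that assumption – the field names and thresholds are invented for illustration, not JPMorgan’s actual criteria:

```python
# Minimal sketch of a layered channel allowlist in the spirit of the
# 17-filter pipeline described above. Fields and thresholds are
# invented; JPMorgan's real criteria are not public.

from dataclasses import dataclass

@dataclass
class Channel:
    channel_id: str
    video_count: int
    subscribers: int
    topic: str
    language: str
    flagged_comment_ratio: float  # share of comments a classifier flags

SAFE_TOPICS = {"finance", "news", "education", "sports"}

FILTERS = [
    lambda c: c.video_count >= 50,             # drop one-off viral channels
    lambda c: c.subscribers >= 10_000,         # require an established audience
    lambda c: c.topic in SAFE_TOPICS,          # stick to vetted subject areas
    lambda c: c.language == "en",              # languages the team can review
    lambda c: c.flagged_comment_ratio < 0.01,  # comment section looks clean
]

def allowlist(channels):
    """Keep only the channels that pass every filter layer."""
    return [c for c in channels if all(f(c) for f in FILTERS)]
```

A real pipeline would pull these fields from YouTube’s API and tune each threshold against manual review; the point is the shape – many cheap layers, each cutting the pool, which is how 5 million channels could shrink to a few thousand.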

The allowlisting began in March, when JPMorgan Chase culled the pre-approved list of sites to run ads on from 400,000 down to 5,000. Currently, it reportedly runs ads on 10,000.

The bank started working on the YouTube algorithm in August and rolled it out in October. And, it’s claiming a success rate of 99.9%. JPMorgan is still conducting manual checks on those channels and tweaking the tool to ensure it’s foolproof.

Business Insider quotes Aaron Smolick, executive director of paid-media analytics and optimization at JPMorgan Chase, who said that Google’s method of monetizing YouTube may work fine for Google, but it isn’t working for his company:

The attention of protecting a brand has to fall on the actual people within the brand itself.

That’s a proactive approach to dealing with Google’s YouTube mess. But some advertisers have chosen instead to get off YouTube and stay off until Google manages to keep their ads away from content promoting terrorism and hate.

It hasn’t happened yet. Speaking at Business Insider’s IGNITION Conference in November, AT&T chief brand officer Fiona Carter confirmed that the company still hasn’t returned to YouTube. Other advertisers keeping their distance include Priceline, Kimberly-Clark, Squarespace and Casper, according to data from ad analytics platform MediaRadar cited by Business Insider.


Source: Naked Security

 

Children at ‘significant’ social media risk

Created: 5 January 2018

Slime.

It’s the most beautiful, satisfying, relaxing thing I’ve ever seen, and it proves that children are geniuses, because they’re smart enough to make it and smart enough to watch online slime videos.

Says 11-year-old Alina:

If you’re like really stressed or something and you watch a really satisfying slime video it makes you like calmer.

So that’s one of many plus sides of how kids – the under-13 crowd – are using social media. They say it takes their minds off things, too: “If you’re in a bad mood at home you go on social media and you laugh and then you feel better,” says 10-year-old Kam.

But according to a Children’s Commissioner report that looked at social media use among 8- to 12-year-olds, children aren’t getting enough guidance to cope with the emotional demands that social media puts on them.

For instance, many children interviewed for the report were over-dependent on “likes” and comments for social validation, according to researchers. They spoke to 32 children in eight focus groups, each including two friendship pairs, grouped by age and gender. The report says that the friendship pairing was done to enable the children to “open up with more confidence during the research, and to allow for insight around peer dynamics and other social factors to emerge more naturally.”

These are some of the things the kids said about getting social validation from social media:

If I got 150 likes, I’d be like, ‘that’s pretty cool, it means they like you’.

I just edit my photos to make sure I look nice.

My mum takes pictures of me on Snapchat… I don’t like it when your friends and family take a picture of you when you don’t want them to.

I saw a pretty girl and everything she has I want, my aim is to be like her.

Speaking to the BBC, the Children’s Commissioner for England, Anne Longfield, called on schools and parents to prepare children emotionally for what she called the “significant risks” of social media as they move schools and meet new classmates, many of whom have their own phones.

As it is, pretty much everything kids are doing on social media has pluses and negatives. Take, for example, when kids follow their family members. The report cited these positives given by the children they interviewed:

  • I learn what to do and what not to do on social media from my older siblings
  • I can see what my family are doing on my parent’s social media

…and these negatives:

  • I see things that weren’t meant for me to see
  • I don’t understand why my parents need to take pictures of me
  • I worry about how my siblings use social media
  • I don’t feel I have any control over photos when my parents post them/I can’t ask my parents to take them down

The stress starts with older kids, Ms. Longfield told the BBC:

It’s really when they hit secondary school that all of these things come together.

They find themselves chasing likes, chasing validation, being very anxious about their appearance online and offline and feeling that they can’t disconnect – because that will be seen as socially damaging.

She suggested compulsory digital literacy and online resilience lessons for year six and seven pupils (10- to 12-year-olds) to teach them about the “emotional side of social media”. She also suggested that parents should help kids to “navigate the emotional rollercoaster” of the negative aspects of social media.

The BBC also spoke with Matthew Reed, chief executive of the Children’s Society, who urged parents to have “open conversations” with their kids about the sites and apps they use:

This can include looking through their ‘friends’ lists together and finding out how their child knows different people.

Check their privacy settings and get children to think about what information and photos they are comfortable with others having access to.

On the plus side, the report found that staying safe online was a priority for the younger children – age 8 to 11 – the researchers interviewed.

Most of the children had strict rules about what they can and cannot share online, which seemed to be a strong reflection of the safety messages they receive from their parents and schools. In this context, ‘safety’ was understood as protecting oneself from strangers, online predators, cyber-bullying and ‘bad’ things people share, such as swearing or violence.

Of central importance was the need to ensure they do not reveal any personal identifiable information, such as where they live or where they go to school, through the images or content they share. Many talked about specific strategies they use to protect themselves, such as never revealing their school uniform or never showing their house number in photos. Some also said they are always careful to make sure the background in their photos doesn’t easily give away what their home looks like.


Source: Naked Security

 

F**CKWIT – the video!

Created: 4 January 2018

By popular demand, we went live on Facebook to discuss the F**CKWIT, aka KAISER, aka KPTI, aka Meltdown, aka Spectre, aka The Intel Bug. (By the way, AMD just confirmed that two of the three published vulnerabilities can be made to work on AMD chips as well.)

Here’s a video to help you decide what to do next…

(Can’t see the video directly above this line? Watch on Facebook instead.)

Note. With most browsers, you don’t need a Facebook account to watch the video, and if you do have an account you don’t need to be logged in. If you can’t hear the sound, try clicking on the speaker icon in the bottom right corner of the video player to unmute.


Source: Naked Security

 

Artificial Intelligence to listen for suicidal thoughts on social media

Created: 4 January 2018

Canada is planning a pilot project to see if Artificial Intelligence (AI) can find patterns of suicidality – i.e., suicidal thoughts or attempts, self-harm, or suicidal threats or plans – on social media before they lead to tragedy.

According to a contract award notice posted by the Public Health Agency of Canada (PHAC), the $99,860 project is being handled by an Ottawa-based AI company called Advanced Symbolics Inc. (ASI). The agency says the company was the only one that could do it, given that ASI has a patented technique for creating randomized, controlled samples of social media users in any geographic region.

The focus on geographic region is key: As it is, the country is reeling after a dramatic spike in suicides in Cape Breton among girls 15 years old and younger and men in their late 40s and early 50s.

The idea isn’t to identify specific individuals at risk of suicide. Nor is it to intervene. Rather, the project’s aim is to spot patterns on a regional basis so that public health authorities can bolster mental health resources to regions that potentially face suicide spikes.

The project is set to begin this month and finish by the end of June, if not before.

First, PHAC and ASI will work to broadly define these suicide-related behavior terms: ideation (i.e., thoughts), behaviors (i.e., suicide attempts, self-harm, suicide) and communications (i.e., suicidal threats, plans). The next phase will be to use a classifier built on those definitions to research the “general population of Canada” in order to identify patterns associated with users who discuss suicide-related behavior online.
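In code terms, the region-level design means no individual is tracked: public posts are classified, then only aggregate per-region rates are kept. A hedged sketch of that aggregation step, with classify_post() standing in for ASI’s proprietary, unpublished classifier:

```python
# Hedged sketch of region-level aggregation: classify public posts,
# keep only per-region rates. classify_post() is a stand-in for ASI's
# proprietary classifier, which is not public.

from collections import Counter

LABELS = {"ideation", "behaviors", "communications"}  # the PHAC categories

def classify_post(text):
    """Placeholder: return one of LABELS, or None for unrelated posts."""
    raise NotImplementedError("stand-in for the proprietary classifier")

def regional_rates(posts):
    """posts: iterable of (region, text) pairs from public feeds.
    Returns each region's share of posts carrying any suicide-related
    label, which health authorities could watch for unusual spikes."""
    totals, flagged = Counter(), Counter()
    for region, text in posts:
        totals[region] += 1
        if classify_post(text) in LABELS:
            flagged[region] += 1
    return {region: flagged[region] / totals[region] for region in totals}
```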

According to CBC News, PHAC says that suicide is the second-leading cause of death for Canadians aged 10 to 19. The news outlet quoted an agency spokesperson:

To help prevent suicide, develop effective prevention programs and recognize ways to intervene earlier, we must first understand the various patterns and characteristics of suicide-related behaviors.

PHAC is exploring ways to pilot a new approach to assist in identifying patterns, based on online data, associated with users who discuss suicide-related behaviors.

Kenton White, chief scientist with ASI, told CBC News that nobody’s privacy is going to be violated.

It’d be a bit freaky if we built something that monitors what everyone is saying and then the government contacts you and said, ‘Hi, our computer AI has said we think you’re likely to kill yourself’.

ASI’s AI will be trained to flag particular regions where suicide may be likely. In Cape Breton, for example, three middle-school students took their lives last year.

White said that there are patterns to be gleaned from Cape Breton’s spike in suicides. The same can be said for patterns that White says have appeared in suicides in Saskatchewan, in Northern communities, and among college students.

ASI CEO Erin Kelly told CBC News that the AI won’t analyze anything but public posts:

We’re not violating anybody’s privacy – it’s all public posts. We create representative samples of populations on social media, and we observe their behavior without disturbing it.

CBC News reports that ASI’s technology could give regions a two- to three-month warning before suicides potentially spike – what could be a vital beacon that government officials could act on by mobilizing mental health resources before the suicides take place.

This isn’t the first time that technology has been applied to suicide prevention. At least as early as 2013, Facebook was working with researchers to put its considerable data-mining might to use to try to discern suicidal thoughts by sifting through the social media streams and risk factors of volunteers. Such risk factors include whether a person is male (making suicide more likely), married (less likely) or childless (more likely).

Facebook and researchers at the Geisel School of Medicine at Dartmouth recruited military veterans as volunteers: a group with a high suicide rate.

At that early stage, Facebook, like PHAC and ASI, didn’t include intervention. The researchers weren’t empowered to intervene if suicide or self-harm was flagged.

Since then, Facebook has introduced technologies geared at intervention.

In March 2017, Facebook said it planned to update its algorithms so as to “listen” for people in danger of suicide. The idea was to look out for certain key phrases and then refer the matter to human beings on the Facebook staff, who would then ask whether the writer was OK.

The move followed a similar attempt on Twitter by the Samaritans in 2014. That attempt was aborted within months after critics lambasted the project’s design over privacy concerns: it was seen as enabling stalking, given that users couldn’t opt out.


Source: Naked Security

 
