Thursday, December 27, 2018

A band-aid for Twitter's horribly broken security

If you manage a high-value Twitter account, consider creating a second, "burner" account. After enabling multifactor authentication on the high-value account, add the same phone number to the burner account. This turns off SMS access features for the high-value account without breaking its MFA.
Updated December 31: Added a description of the variations between mobile app, mobile web UI, and desktop web UI, along with a bug Kevin Beaumont pointed out (described at the end of this post).

On Christmas Eve, Richard De Vere of The AntiSocial Engineer published a doozie of an article describing a serious flaw in Twitter’s security. In a nutshell, if a Twitter account has a phone number connected to it, Twitter accepts instructions via SMS from that phone number, with no additional authentication required.


It gets worse – far worse. Twitter requires a phone number be connected to an account in order to enable multifactor authentication. Twitter does support using a mobile security app or a physical key for MFA, and allows you to turn off SMS-based 2FA, but nonetheless requires a phone number to remain connected to the account. Removing the phone number also turns off "login verification" (Twitter's term for multifactor authentication).


Removing a phone number from Twitter also turns off multifactor authentication

Meaning, a user security-aware enough to set up two-factor authentication to protect their Twitter account is also opening a back door into that account: one that allows an attacker to follow, unfollow, tweet, retweet, like, DM, turn push notifications on or off, or remove the phone number from the account.
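
To make that concrete, Twitter has long published a list of SMS commands it honors. From memory, the set includes roughly the following (it may have changed since this was written), any of which an attacker could issue from, or while spoofing, the connected number:

    FOLLOW username      follow a user
    UNFOLLOW username    unfollow a user
    RT username          retweet that user's latest tweet
    FAV username         like that user's latest tweet
    D username message   send a direct message
    ON / OFF             turn SMS notifications on or off
    STOP                 shut off SMS entirely (and, as described below, 2FA)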


And since Twitter 2FA requires a phone number, sending a "stop" message to Twitter from (or spoofing) the number associated with an account will disable 2FA on that account, with no notice to the rightful account owner.


That's right: enabling 2FA on Twitter explicitly enables an SMS back door to Twitter, which can be used to disable 2FA on Twitter, without you knowing that 2FA has been disabled.

Tuesday, December 4, 2018

The most challenging aspect of security

Ever wondered what the most challenging aspect of security is? It's not understanding the evolving threats and actors. Those are certainly important, but people smarter than me do a fine job of tracking and reporting on emerging threats.

It's not the constant evolution of tools and blinky boxes. Sure, tools are part of the mix, and knowing which tools help in which situations is a must, but a tool is a tool. Given the right tool and a suitable understanding of the problem, the right people can figure out the right way to use it.

It's not understanding the technologies and solutions I'm tasked with defending. Of course that is crucial, but 20 years in the field have taught me a great deal about operating systems, applications, networking, business, and the way systems work, break, and can be fixed.

The biggest challenge? It's not threats, blinky boxes, or foundational knowledge. It's the context switching. It's being eyeball deep into a topic when something else demands attention. It's the interrupt-driven pace of work, always at the mercy of the next unscheduled threat.

What techniques do you use to carve out dedicated time for strategic work? How do you avoid the pitfall of perpetual firefighting? Comment below or join the discussion on Twitter.

Tuesday, August 7, 2018

On teaching kids to make good security and privacy choices themselves

February 10, 2019: Since writing the post below, I've learned of a technique used to get around Instagram's practice of obscuring unsolicited direct messages.

Instagram generally blurs images DM'ed by strangers, with a prompt asking whether the recipient wants to accept the message. It's a simple and sometimes-effective way to reduce unwanted sexual images (more often than not sent to female accounts). To get around that, some lowlifes begin a DM conversation benignly, engaging their mark in innocent conversation. Once the target has accepted the (so-far above-board) conversation, the abuser sends obscene images that are no longer obscured, because the sender is now "known." The abuser keeps a clean public profile and engages in abusive behavior only through DM; and since the abusive content is sent by DM, Instagram staff either cannot or will not (it's unclear which) view the content to act on abuse reports.

Educate your children: even if a conversation seems innocuous, you never really know who is on the other end of an Internet exchange.

If you or your child have received such unsolicited obscene material, you can report it to the FBI's Internet Crime Complaint Center (IC3) at https://www.ic3.gov/complaint/default.aspx/

If the recipient is under the age of 16, you can also report it to the National Center for Missing and Exploited Children (NCMEC) at https://report.cybertip.org/ 

In both cases, a screen capture of the obscene DM that includes the sender's name and/or profile alias will help preserve evidence if the abuser later deletes the DM.



Over the years I've written several posts on raising security-conscious kids. A trend in my writing, as well as in my parenting, has been that as the kids grow up, my approach evolves from technical controls toward educating them to make good choices themselves. A recent conversation with my high-school daughter highlights why.

My middle daughter maintains an active Instagram account. A household rule is: if your social media account is public, don't post anything personally identifiable; if you want to post personal stuff, keep your account private. It's a rule that gradually loosens as the kids grow older and can make informed decisions. As my daughter has shifted from private to somewhat public, she was recently asked to be a "brand ambassador" for a company.


We discussed some of the dangers and abuses a teenage girl would face as her exposure grew (abuses I have little first-hand experience with, but that I am well aware of through conversations with many of you). Her response was both shocking and encouraging: 
"Dad, I already deal with all of that. I just block and report them. Besides, Instagram obscures DM'ed photos unless I accept the request."
While not the response I expected, and not a topic I would have ever thought relevant in the not-too-distant past, I have to admit that's a pretty mature response. 

The moral? Technical controls can only go so far; as kids grow into teenagers and fledgling adults, they need the tools and skills to look after themselves.

Monday, February 12, 2018

Using malware's own behavior against it

A quick read for a Monday night.

Last week, while investigating some noisy events in my security monitoring system, I noticed two Windows features working at cross purposes and filling up event logs: link-local multicast name resolution (LLMNR) was putting lots of name resolution requests onto the local network segment, which Windows Firewall promptly blocked.

LLMNR is the successor to the NetBIOS Name Service (NBNS). Both serve the same purpose: if a computer cannot resolve a name through DNS, it essentially yells out on the local network, "hey, anyone know an address for xyzzy?"

This sounds like a reasonable solution, but it invites abuse. If an adversary has a foothold on my network, they can either listen for and reply to common typos, or actively interrupt legitimate DNS and give their own answers instead. In either case, the adversary can provide fake addresses for servers and websites, directing users to malicious places (and possibly stealing usernames and passwords along the way).

Generally speaking, I recommend turning off LLMNR and NBNS, as well as using a trusted DNS provider that prevents access to known-malicious websites.
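
On Windows, both settings can be pushed through Group Policy; if you would rather script it, here is a minimal sketch using the well-known registry values. It's illustrative, not a deployment tool: run it as Administrator, and verify the keys against your own environment first.

    # disable_llmnr_nbns.py - illustrative sketch; run as Administrator.
    # Writes the standard policy values: EnableMulticast=0 disables LLMNR,
    # and NetbiosOptions=2 disables NetBIOS over TCP/IP per interface.
    import winreg

    # Disable LLMNR (the "Turn off multicast name resolution" policy).
    key = winreg.CreateKey(
        winreg.HKEY_LOCAL_MACHINE,
        r"SOFTWARE\Policies\Microsoft\Windows NT\DNSClient",
    )
    winreg.SetValueEx(key, "EnableMulticast", 0, winreg.REG_DWORD, 0)
    winreg.CloseKey(key)

    # Disable NBNS on every network interface.
    params = r"SYSTEM\CurrentControlSet\Services\NetBT\Parameters\Interfaces"
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, params) as interfaces:
        for i in range(winreg.QueryInfoKey(interfaces)[0]):
            name = winreg.EnumKey(interfaces, i)
            with winreg.OpenKey(interfaces, name, 0, winreg.KEY_SET_VALUE) as iface:
                winreg.SetValueEx(iface, "NetbiosOptions", 0, winreg.REG_DWORD, 2)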

Today I came across a slick way to use such malware's own behavior against it: a tool called Respounder. LLMNR "responder" malware wants to lure victims, so it will generally reply to *any* request with a bogus address. Respounder turns that greed into a tell: it sends out name requests for hosts that don't exist, and flags anything that answers.
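
Respounder is its own standalone tool, but the trick is simple enough to sketch. Here is a minimal, illustrative Python version of the idea (not the actual tool): query the LLMNR multicast group for a hostname that cannot exist, and treat any answer as a likely poisoner.

    # llmnr_canary.py - illustrative sketch of the Respounder technique.
    # LLMNR uses the DNS wire format on multicast 224.0.0.252:5355 (RFC 4795).
    import random
    import socket
    import string
    import struct

    LLMNR_GROUP, LLMNR_PORT = "224.0.0.252", 5355

    def build_query(name):
        # DNS-style header: random transaction ID, no flags, one question.
        header = struct.pack(">HHHHHH", random.randint(0, 0xFFFF), 0, 1, 0, 0, 0)
        qname = b"".join(
            bytes([len(label)]) + label.encode() for label in name.split(".")
        ) + b"\x00"
        return header + qname + struct.pack(">HH", 1, 1)  # QTYPE=A, QCLASS=IN

    # A random name that no legitimate host should claim.
    bogus = "".join(random.choices(string.ascii_lowercase, k=12))
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(3.0)
    sock.sendto(build_query(bogus), (LLMNR_GROUP, LLMNR_PORT))
    try:
        data, addr = sock.recvfrom(1024)
        print(f"ALERT: {addr[0]} answered for '{bogus}' - possible LLMNR poisoner")
    except socket.timeout:
        print(f"No one answered for '{bogus}' - no obvious poisoner right now")
    finally:
        sock.close()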

Wednesday, January 24, 2018

Seeing isn't believing: the rise of fake porn

The following may be disturbing to some readers, but I feel it is important to write, for two reasons. The first is to stay a step ahead of cyberbullies who could use this technology to humiliate others. The second is to give readers - especially parents and teens - information to consider when deciding what to share publicly, privately, or at all.

In late 2016, software maker Adobe showcased an audio-editing tool that, given a sample of someone's speech, could create a natural-sounding recording of that person saying words they never actually spoke. This capability could come in very handy for editing podcasts or narration, allowing a producer or sound engineer to edit the spoken text instead of re-recording.

Last summer, a University of Washington research project demonstrated the next logical step. The researchers took a video recording of a public speech, replaced the audio with a recording of something else entirely, and manipulated the video so the speaker's face and mouth movements match the new audio.

Faking someone's spoken words is one thing. But technology publication Motherboard wrote today of a new and disturbing practice that has gained steam over the last six weeks or so: so-called "face-swap" porn, an artificial intelligence-aided merging of celebrity faces onto the bodies of porn actors, creating convincing videos that appear to show those celebrities.

In the article (warning: NSFW, and unsettling content) Motherboard writes of individuals taking benign video from celebrities' public Instagram stories, and transferring the faces onto nude Snapchats posted by others. Using freely available software and step-by-step instructions, the technique can be accomplished by even a novice computer user. 

My fear is that it won't stop with celebrities. The thought of someone taking video from my daughter's Instagram, and creating a believable fake video with which to humiliate her, shakes me to the core, as it should any parent.

So why write this?

The first reason is to counter would-be cyberbullies. My hope is that a fake video - even an extremely convincing fake - might be less traumatic if it is widely known that such fakes are no longer fantasy. 

The second reason is to give you food for thought when it comes to privacy decisions. What you (or your child) post publicly, may be seen by - or downloaded and abused by - anyone. 

There is no one-size-fits-all solution when it comes to privacy and safety, but I'll share how I have approached this with my kids. When my children first began using social media, our household rule was that a social media account could be either public or personal, but never both.

If the child wanted to share publicly, it had to be under a pseudonym and never include pictures of them, their family members, pets, or home. If the child wanted to identify themselves, the account had to be private and only shared with friends they (and we) knew in real life. 

As they and their situational awareness have grown, we have given them more discretion, but you can bet this development is the subject of discussion in our home.

Friday, January 12, 2018

It's W2 scam season


Time for a short Friday afternoon social engineering discussion. If you work in HR / finance / benefits, you'll want to stick with me.

It's January, the beginning of tax season in the US (and, I presume, in other countries as well). Employers in the US are required to provide W2 statements documenting pay and tax withholding to their employees by the end of this month.

Scammers know this, and love to exploit this annual ritual. The common scheme I see is an email or phone call pretending to be from a company executive (often the CEO or CFO) or from the taxing authority, with an urgent request for employee records.

Urgent, because a sense of urgency can short-circuit skepticism and get an employee to respond before thinking.

Oddly, even though employers must provide this data by January 31, W2 scams have tended to peak around March for the last few years. Perhaps there's a psychological element: individual tax returns are due by April 15, so taxes remain top of mind for HR/finance/benefits/payroll employees.

If you work in HR / finance / payroll / benefits, or otherwise have access to employee personal data, stay vigilant over the next 90 days or so. Be suspicious of any request for employee records, especially if it comes in an unusual manner.

Take the time to verify the request through a trusted channel. Depending on your organization size, that might mean in person, over the phone, or via an established business process.

DON'T ship a CSV or XLS of employee data simply because someone - even the CEO - sends an email requesting it.
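
If your mail platform lets you script a first-pass screen, even a crude filter can buy a human reviewer time. Here's a purely illustrative sketch; the phrases, the example.com domain, and the header checks are placeholder assumptions, not a vetted detection rule.

    # w2_triage.py - illustrative sketch, not a vetted detection rule.
    # Flags classic W2-scam markers: requests for employee tax data,
    # a Reply-To outside the company, and manufactured urgency.
    import email
    from email import policy

    PHRASES = ("w2", "w-2", "payroll records", "wage and tax statement")
    URGENCY = ("urgent", "immediately", "asap")

    def triage(raw_bytes, internal_domain="example.com"):
        msg = email.message_from_bytes(raw_bytes, policy=policy.default)
        subject = (msg["Subject"] or "").lower()
        body = msg.get_body(preferencelist=("plain",))
        text = body.get_content().lower() if body else ""
        flags = []
        if any(p in subject or p in text for p in PHRASES):
            flags.append("asks about employee tax/payroll records")
        reply_to = str(msg["Reply-To"] or "")
        if reply_to and internal_domain not in reply_to:
            flags.append("Reply-To points outside " + internal_domain)
        if internal_domain not in str(msg["From"] or ""):
            flags.append("sender is not an internal address")
        if any(w in subject for w in URGENCY):
            flags.append("manufactured urgency in the subject")
        return flags

Anything flagged still goes to a human for the trusted-channel verification above; the goal is to slow the reply-before-thinking reflex, not to replace judgment.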

If you own or manage a business, or manage people who have access to employee records, make sure they know how employee records are handled, and know the appropriate process for requesting and approving transfer of that data.

If there is no established process for handling employee records - make one, and stick to it.