Issues I’ve encountered building an app with magic links:
1. Include a fallback sign-in code in your magic link, in case the user needs to log in on a device where accessing their email isn’t practical.
2. Make sure the sign-in link can handle email clients that open links automatically to generate preview screenshots.
3. Ensure the sign-in link works with email clients that use an in-app browser instead of the user’s preferred browser. For example, an iOS user might prefer Firefox mobile, but their email client may force the link to open in an in-app browser based on Safari.
We went to a lot of trouble to make our magic link implementation work with anti-phishing software, corp link checkers and more. https://github.com/FusionAuth/fusionauth-issues/issues/629 documents some of the struggle.
I think that a link to a page where you enter a one time code gets around a lot of these issues.
I arrived at the same conclusion after going through the steps and seeing that some corporate systems mark the login link as malicious, and there’s nothing I can do about it.
Sending a code gets around a lot of issues.
Also: Safari can autofill codes from both email and text messages on macOS and iOS. It then automatically deletes the message too.
https://www.webnots.com/how-to-autofill-verification-codes-i...
Why not just support a password?
Only having magic links gets you a load of stuff for free:
- Higher level of security than just user+pass (with forgot password)
- Email verification
- Lifecycle management: in a SaaS, when a user no longer has a corporate email they de facto can't log in, whereas with user+pass you need to remember to remove their account manually on each SaaS, or have an integration with your AD (for example)
It’s not a higher level of security than password-based authentication. Why do you state that?
One-time email verification is not the same security model as magic links. Magic links require instant access. Many security-sensitive sites require a time delay and a secondary notification for password reset links, which you can’t reasonably do for login links.
Lifecycle management is an interesting point. There are some underlying assumptions that might not hold though—losing an email doesn’t necessarily mean downstream accounts should be auto disabled too. Think Facebook and college emails, for example.
> It’s not a higher level of security than password-based authentication. Why do you state that?
Personally I'm no fan of magic links.
But the people who do like magic links would say the typical 'forgot password' flow is to send a password reset magic link by e-mail. That means you've got all the security weaknesses of a magic link, and the added weaknesses of password reuse and weak passwords.
Of course you can certainly design a system where this isn't the case. Banks that send your password reset code by physical mail. Shopping websites where resetting your password deletes your stored credit card details. Things like that.
It's not about 'liking' magic links. The statement
> That means you've got all the security weaknesses of a magic link, and the added weaknesses of password reuse and weak passwords.
is objectively true. I don't really 'like' magic links, but I think they're very easy to implement and simple to use for infrequently accessed systems. Arguably easier than user/pass and certainly more secure.
Even with those assumptions (which I question), it is only a higher level of security if you assume that your users are reusing passwords, or using low entropy passwords. Neither would apply if they are using a password manager, which a growing majority of web users are.
> It’s not a higher level of security than password-based authentication. Why do you state that?
It could be, depending on how the user has secured their email inbox access. I know I pay a lot more attention to my inbox than some random account. I don't have data, but I think this is true of most people.
I'm also more likely to enable MFA on my email account than I will on every random account I sign up for. And as far as the account providers, I trust the big email providers to be more secure than some random website with an unknown level of security.
You raise some valid points about tying access to a third party and what makes sense. It's not a simple issue.
That's a fair point, but does a time delay or secondary notification (the latter could be done for magic links of course) really outweigh the practical security risk of password reuse, attacks, leaks etc.? I would argue not, unless the user had an insecure email account for some reason.
Re lifecycle management: unless you're also linking a phone number or some other "factor", I think in a traditional user+pass scenario you're also SOL if you lose access to $Email1 before you update your account to use $Email2, as changing your email to $Email2 would usually send an email to $Email1 to confirm the action. In that case you're in the same position as magic-link login + email-change functionality. Similarly, lifecycle management only comes for free if you don't implement email-change functionality.
It's incredibly annoying to the user though
Oh, we support passwords! (And passkeys, and social login, and OIDC, and SAML.)
Just want to make sure magic links work as well as they can.
Different folks have different requirements, and since we're a devtool, we try to meet folks where they are at.
We actually recently added a feature which lets you examine the results of a login, including how the user authenticated, and deny access if they didn't use an approved method.
One weird reason I've personally run into: when building on the edge, like with Cloudflare Workers, you can run into timeout issues because of how long password hashing takes.
That should only be an issue on the free plan, which has a 10ms CPU time limit. The paid plan gets 30 whole seconds, which is plenty of time to hash lots of passwords.
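For anyone hitting this: a minimal sketch of the kind of hashing that tends to fit tight CPU budgets, assuming a Workers-style runtime that exposes Web Crypto (the iteration count is an illustrative placeholder to tune against your plan's CPU limit, not a recommendation):

    // PBKDF2 through crypto.subtle runs in native code, so it is far cheaper
    // per unit of work than a pure-JS bcrypt/argon2 port on the same budget.
    async function hashPassword(password: string, salt: Uint8Array): Promise<ArrayBuffer> {
      const key = await crypto.subtle.importKey(
        "raw",
        new TextEncoder().encode(password),
        "PBKDF2",
        false,
        ["deriveBits"],
      );
      return crypto.subtle.deriveBits(
        // Iterations are the knob that trades CPU time for attacker cost.
        { name: "PBKDF2", hash: "SHA-256", salt, iterations: 100_000 },
        key,
        256, // length of the derived hash, in bits
      );
    }

Store the salt alongside the derived bits and compare in constant time on login.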
Because ≈everyone reuses passwords and so accounts get taken over.
A majority of internet users (>60% in 2024 and growing) use password managers and don’t reuse passwords.
In my experience 60% seems too high even for supposedly technical users (ref: I work in a dev firm), at least away from their jobs.
I definitely don't believe it for the wider population (my gut, again based on people I know, says the number is more like 10%, maybe 15%). Even the 36% figure in the security.org report posted above seems dubious; I suspect there's some bias in their survey. Unless that includes people who use the iCloud password manager for some things and no password manager for everything else, in which case it isn't claiming that 36% routinely use a password manager beyond a few key accounts.
Do you have a source for that number? 60% seems extremely high based on non-technies I know.
Agreed. I'd be thrilled if it were true, though! Because password reuse (esp without MFA) is a big problem.
This is an extraordinary claim on two counts:
1. Sixty percent seems astronomically high; do you have a source?
and
2. Most "normal" non-tech-savvy people I know who do use a password manager (which I've typically installed for them) are revealed a while later to still use a variation of password reuse: either storing the same password per category of websites, or having a password template they use on all sites, e.g. "IdenticalSecretWord_SiteName"
I don't have the source, but don't think 1Password/LastPass/KeePass. Think the "would you like to save this login" built in to Chrome, Firefox, Edge, Windows, and iOS. These days, you have to opt-out of password management.
Right, use of a Password Manager does not imply they are using Password generation - it may just mean they click "Save this password" after logging in using a re-used password.
I'm surprised. >60% seems high even for tech literate software engineers!
So what?
1) It means your users will complain that their account was hacked (even if it was their fault) and might cancel their service
2) Hackers can exploit your system, which hurts you (you are a VPS provider and someone mines crypto and you have to waive it for PR, or you run an email service and someone uses your app to spam, which hurts your email rep), etc.
One-time codes are very vulnerable to phishing. Users are prone to entering codes on any website that resembles the real one.
I was gonna argue that you can fix this but I realized that you’re right. It’s a MITM attack where there’s really no way to stop it, same as passwords. It’s basically the same feature (sign in in a different browser) that also lets attackers in.
That said, here’s how I would mitigate it:
- Like usual, time-based limits on the code
- Code is valid only for the initiating session, requiring the attacker to create a paper trail to phish (rough sketch below)
If you do have a magic link & want to use a code as backup for authenticating a different device/browser, you could:
- Compare IP and/or session cookie between the initiating and confirming window. On match, offer a login button. On mismatch, show the code and a warning stating how it's different, e.g. "You are signing in on a different device or browser, initiated from $os $browser in $city, $country, $ip - $t minutes ago."
It’s not perfect though and may still be prone to phishing.
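A rough sketch of the session-binding check from the list above (the names and the store shape are illustrative, not any particular library):

    // Created when the user requests a code; the code is tied to the
    // session/cookie of the browser that initiated the login.
    interface PendingCode {
      codeHash: string;   // store a hash of the code, never the raw code
      sessionId: string;  // session that requested it
      expiresAt: number;  // e.g. Date.now() + 10 * 60_000
    }

    type VerifyResult = "ok" | "expired" | "bad-code" | "different-session";

    function verifyCode(
      pending: PendingCode,
      presentedCodeHash: string,
      currentSessionId: string,
    ): VerifyResult {
      if (Date.now() > pending.expiresAt) return "expired";
      if (presentedCodeHash !== pending.codeHash) return "bad-code";
      // The mismatch case is where you'd show the warning described above
      // ("initiated from $os $browser in $city...") or refuse outright.
      if (currentSessionId !== pending.sessionId) return "different-session";
      return "ok";
    }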
I've had success with sending a code, and the link takes you to a page where the input is pre-filled with the code, and you just have to click "Login".
Yup, that's a good option. Any kind of user action like a form submission is less likely to run afoul of a link checker.
> I think that a link to a page where you enter a one time code gets around a lot of these issues.
I've done both in my SaaS product - the link is a GET with the OTP in it, the target page checks if the OTP is in the URL, and if not, the user can type it in.
Only for signup, though. For sign-in, the default is to always have the user type it in.
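A minimal sketch of that pattern, with a hypothetical /login/verify endpoint: the GET only renders a form (prefilled if the OTP is in the URL), and the state change happens on the POST, so scanners and previewers that merely fetch the link don't log anyone in:

    // Server-side: render the login form; prefill the code if it came in the URL.
    function renderLoginPage(codeFromUrl: string | null): string {
      const value = codeFromUrl ? ` value="${escapeHtml(codeFromUrl)}"` : "";
      return `
        <form method="POST" action="/login/verify">
          <label for="code">Sign-in code</label>
          <input id="code" name="code" inputmode="numeric"
                 autocomplete="one-time-code"${value}>
          <button type="submit">Login</button>
        </form>`;
    }

    // Escape the code before interpolating it into HTML.
    function escapeHtml(s: string): string {
      const map: Record<string, string> = {
        "&": "&amp;", "<": "&lt;", ">": "&gt;", '"': "&quot;", "'": "&#39;",
      };
      return s.replace(/[&<>"']/g, (c) => map[c]);
    }

As far as I know, the autocomplete="one-time-code" hint is also what lets Safari offer the code autofill mentioned upthread when the user has to type it in manually.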
These days you also have to consider that some mail threat protection tools (at least Microsoft Defender in Exchange Online does this) click links in mails to check them.
Recently ran into this issue: new mail accounts got confirmed automatically, and magic links were invalid by the time the user clicked them, because Microsoft had already logged in with them during checking.
Come to think of it, magic links by definition violate the principle that GET requests should not change state. Defender & preview tools are actually following the established norms here - norms that were established decades ago precisely because we hit the broader problem with the C, U & D parts of CRUD, and collectively agreed that doing destructive operations on GET requests is stupid.
You can GET a <form> which POSTs when you click the "log in" button.
Yes, but the GET itself isn't changing any state. The state changes only after clicking on the button. This is OP's point.
TeMPOraL said, "magic links by definition violate the principle that GET requests should not change state". That is a reasonable thing to think, but it is not true, because you can GET a <form> which POSTs when you click the "log in" button, unless you think a link to such a <form> page should be excluded from the definition of "magic link".
> unless you think a link to such a <form> page should be excluded from the definition of "magic link".
Yes. Linking to a form requiring user to press a button to submit an actual POST request is one proper way of doing it, and won't confuse prefetchers, previewers and security scanners - but it lacks the specific "magic" in question, which is that clicking on a link alone is enough to log you in.
Can't really have both - the "magic" is really just violating the "GET doesn't mutate" rule, rebranding the mistake we already corrected 20+ years ago.
(EDIT: Also the whole framing of "magic links" vs. passkeys reads to me like telling people that committing sins is the wrong way of getting to hell, because you can just ask the devil directly instead.)
Aha, then we agree on the facts, just disagree about nomenclature.
Your theological analogy is hilarious!
In your example, it seems to me that the POST request is the action that changes the state.
Agreed.
This is the way.
That seems like a really thoughtless idea.
What can you do to prevent automatic confirmation in that case?
I run an authorization service that allows logging in using magic links, and we managed to solve this.

The first approach was for the GET request from opening the link not to log the user in, but to return an HTML page with JavaScript that issued a POST request with a code from the link to log the user in. This worked well for a long time, because email scanners were fetching links from emails with GET requests but did not execute JavaScript on the fetched pages. Unfortunately, some time ago Microsoft tools indeed started to render the fetched pages and execute JavaScript on them, which broke the links.

What works now is to check if the link is open in the same browser that requested the link (you can use a cookie to do it) and only automatically login the user in these cases. If a link is open in a different browser, show an additional button ('Login as <email address>') that the user needs to click to finish the login action. MS tools render the login page but do not click buttons on it.
The issue that MS tools introduced is broader, because it also affects email confirmation flows during signups. This is less visible, because usually the scanners will confirm emails that the user would like to confirm anyway. But without additional protection steps, the users can be signed up for services that they didn't request and MS tools will automatically confirm such signups.
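A rough sketch of the same-browser check described above, with illustrative names and Node-style crypto imports assumed (no particular framework implied):

    import { createHash, randomUUID } from "node:crypto";

    // linkToken -> who the link is for and which browser requested it
    const pendingLogins = new Map<string, { email: string; browserTokenHash: string }>();

    const sha256 = (s: string) => createHash("sha256").update(s).digest("hex");

    // When the user asks for a magic link: remember the requesting browser
    // via a cookie, and tie the emailed link to it.
    function startLogin(email: string): { linkToken: string; cookieValue: string } {
      const linkToken = randomUUID();   // goes into the emailed link
      const cookieValue = randomUUID(); // set as a cookie in the requesting browser
      pendingLogins.set(linkToken, { email, browserTokenHash: sha256(cookieValue) });
      return { linkToken, cookieValue };
    }

    // When the link is opened: auto-login only if the cookie matches.
    // Otherwise render a page with an explicit "Login as <email>" button
    // (a POST), which scanners render but don't click.
    function handleLinkOpen(
      linkToken: string,
      cookieValue: string | undefined,
    ): "auto-login" | "show-confirm-button" | "invalid" {
      const pending = pendingLogins.get(linkToken);
      if (!pending) return "invalid";
      const sameBrowser =
        cookieValue !== undefined && sha256(cookieValue) === pending.browserTokenHash;
      return sameBrowser ? "auto-login" : "show-confirm-button";
    }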
> check if the link is open in the same browser that requested the link (you can use a cookie to do it) and only automatically login the user in these cases. If a link is open in a different browser, show an additional button ('Login as <email address>') that the user needs to click to finish the login action.
Thanks for checking if it's the same browser. Some companies don't care about that (cough booking cough), so harmful actors just spam users with login attempts in the hope a user will click by accident. And poof, some random guy gets full access to your account. I get those every day; if I ever needed to log in this way I would not be able to figure out which request is mine.
Wouldn't that just log you in on the browser doing the clicking, instead of the attacker's browser?
You mean in the booking example? They logged in the browser that... requested access. So basically anyone that knew your login/email.
I think it should check if the browser requesting is the same as the one confirming, or just drop that whole dumb mechanism entirely.
Ok, what if an email has "click this link if it was you who tried to log-in", or "if it wasn't you"?
Will Microsoft automatically authenticate malicious actors, or block you from services built on the assumption that the email client won't auto-click everything?
Login links from my service were automatically clicked and rendered and I know that other services discovered similar problems. Based on this I think that it is very likely the case with all the links in emails, but I don't know if there is any additional heuristic involved that would treat some links differently.
See also this issue which suggests that all links are opened: https://techcommunity.microsoft.com/discussions/microsoftdef...
Note that this doesn't affect all Outlook users; Microsoft Defender for Office 365 is a separate product that only some companies use.
That check for the same browser is a great idea. Thanks!
This can be annoying when you intentionally want to log in to a browser that is not the one making the request, such as an in-app browser.
> But without additional protection steps, the users can be signed up for services that they didn't request and MS tools will automatically confirm such signups.
Indeed it's a bad thing but how bad?
The admins of some web service get a database of emails, send out those registration links, let the recipients' mail software create the accounts, and then what? They end up with a service full of accounts that they could have created without sending those emails, before they send further emails soliciting users to perform some action on their (long forgotten?) account. There is no additional threat unless I'm missing something.
The admins only get an extra thin layer of protection from the confirmation step, but I think any court can see through it.
The exploitation and potential damage would be service-specific. Say a Dropbox-like service for syncing computer files: an attacker creates an account for 'alice@example.org' and gets the signup email automatically confirmed. The attacker uploads some malware files to the account. After some time, Alice attempts to create a valid account and resets the password for 'alice@example.org'. Then Alice installs a desktop file-syncing client provided by the service, and the malware files from the attacker get downloaded to her machine.
Another example would be a company hosting a web app for employees that allows signups only from @company.com addresses. In that case an attacker could sign up with such an address.
You are right. I didn't think about 3rd parties creating accounts on a service they don't control.
It defeats the email verification entirely. If that weren't necessary for something, why would the site require it?
The link leads to a page where you need to press a button (HTTP POST).
As far as I know there’s nothing.
The alternative is to send an OTP in the mail and tell the user to enter that.
In that way there is no link to auto confirm.
However, if you do that, ensure that you have a way to jump straight to the page for entering the OTP, because (looking at you, Samsung) the account registration process can expire or the app gets closed (not active long enough) and your user is stuck.
> These days you also have to consider that some mail threat protection tools (at least Microsoft Defender in Exchange Online does this) click links in mails to check them.
What an insane policy, why am I surprised Microsoft came up with it…
It's not actually insane if the application hosting the link follows the principle that GET requests should not mutate state.
This problem is ~20 years old from when CMS platforms had GET links in the UI to delete records and "browsing accelerator" browser extensions came along that pre-fetched links on pages, and therefore deleted resources in the background.
At the time the easiest workaround was to use JavaScript to handle the link click and dynamically build a form to make a POST request instead (and update your endpoint to only act on POST requests), before the fetch API came along.
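Something like this, for anyone who never saw that era (the selector and data attribute are made up for the example):

    // Intercept clicks on "destructive" links and turn them into POSTs,
    // so prefetchers that merely GET the href no longer delete anything.
    document.querySelectorAll<HTMLAnchorElement>("a[data-post-action]").forEach((link) => {
      link.addEventListener("click", (event) => {
        event.preventDefault();
        const form = document.createElement("form");
        form.method = "POST";
        form.action = link.dataset.postAction ?? link.href;
        document.body.appendChild(form);
        form.submit();
      });
    });

(The server-side endpoint also has to stop honoring the GET, of course, or the workaround is purely cosmetic.)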
It is insane because it brings literally nothing security-wise (an attacker can easily detect that the link is being opened by something other than an end-user's browser, and not deliver the payload) while actually compromising the security of their users (by allowing an attacker to know which addresses exist and which do not, which is very useful if you want to attack companies).
It not only shows attackers that your address exists, it also shows that it is hosted on Microsoft 365 and that ATP is licensed.
The idea is that the pre-fetching is done by an environment that looks similar to the end-user's browser.
I've had this problem as a user, accidentally previewing a link in iOS by tapping for too long.
This is even worse for copying the link. On iOS the contextual menu comes with the preview, which will destroy the magic link.
4. Make sure the sign-in link on mobile works with your mobile app.
When McDonald's switched from email/password to magic links I had a hard time getting the magic link to work with the McD app. It usually would just open in the McD website.
This was quite annoying because about 98% of the time I eat McD's I would not do so if I could not order via the app [1].
I finally gave up and switched to using "Sign in With Apple" (SIWA). There was no way that I could find to add SIWA to an existing McD account, so I had to use the SIWA option that hides the real email from McD. That created a new McD account, so I lost the reward points that were on the old account, but at least I could use the McD app again.
[1] They have a weekly "Free Medium Fries on Friday" deal in the app available for use on orders of at least $1. Almost every Friday for lunch I make a sandwich at home and then get cookies and the free fries to go with it from McD.
McD app is the absolute worst.
1) Rooted or bootloader-unlocked Android devices are not allowed (granted, it's easy enough to get past the checks for now, but they're still there). 2) 2FA requirements, as if anyone would bother to steal coupons from others.
It appears that they want ordering burgers to have the same level of enhanced security as banking apps. Not even crypto or trading apps bother to block unlocked devices in such a way. Blocking rooted devices doesn't even make banking apps more secure but for them I can at least understand the reasoning.
I have heard that you are basically paying double what you normally would if you aren’t hunting for deals in mcd’s app these days. How much truth is there to that?
A lot. MCD corporate seems determined to get on the user data gravy train, and appears to be subsidizing it for the franchisees.
Three large fries ordered at the counter costs over ten dollars.
It's not about data, it's customer segmentation. Frequent customers are more price-sensitive and are willing to use the app to get all the discounts, while occasional customers will not, so they can capture the more price-sensitive part of the market while getting higher margins from occasional buyers.
As someone who spent many years segmenting customers and generating personalized marketing offers -- McDonald's is awful at this. I was a 2-3x/monthly customer (USA based) for years (even more frequent a decade ago, but I'm talking about since the app), ordering the exact same core items every time (except during breakfast).
When they began "value meals" last summer (which don't include their flagship items) they also removed the best deals from the app, the ones that did include Big Mac, QPC, 10-nuggets. I've placed one non-breakfast order in 6-8 months, whenever they started this.
I'm just one person, but if a customer declines from an expected 15-20 visits over a half-year period to 1, and you don't adjust your offer algorithm (and you're the biggest restaurant company in the world so no lack of resources), something is seriously wrong.
Whenever this happens to me, I keep wondering how much of an A/B test I'm in, where I'm in the "less important group". Is it possible that their changes engaged (or profited from) the more active (daily/weekly) customers by making your situation worse?
Perhaps. Let's assume the value meals are a massive hit and they are collecting far more revenue from customers who like them than they are losing from people like me.
That's the whole point of data analytics and personalized marketing - even if the value meal works for most people they can still go back to sending me the offers and promotions I responded to previously, in an attempt to reverse my recent decline in spend/visitation. The app makes it possible to send individualized offers. There shouldn't be an entire "B" group where they just say, oh well.
They used to have great deals on the app in Germany. I used to go to McDonald's all the time. The deals suck now, and now I only go if I'm really craving a McMuffin Bacon & Egg.
Whatever they're doing also isn't working for me.
> they also removed the best deals from the app
They've captured the user base with the money that corporate was pumping into the app deals, and are in the process of enshittifying it by transferring the value to themselves instead of the users.
This can work in a lot of industries - I am skeptical fast food is one of them. Switching costs are low, alternates are plentiful, and collecting information (reviewing deals/prices across companies) is relatively easy.
If McDonald's enshittifies its deals while continuing to raise prices, it's way too easy for loyal customers to go elsewhere. I'm saying this as a huge fan and extremely loyal customer of McDonald's for decades... they are at serious risk of losing people like me. As I stated, I've gone from 15-20 visits to 1 since last June/July, whenever they made the big change.