Popular shared stories on NewsBlur.
2774 stories
·
61998 followers

Millions of Accounts Vulnerable due to Google’s OAuth Flaw ◆ Truffle Security Co.

1 Comment

Millions of Americans can have their data stolen right now because of a deficiency in Google’s “Sign in with Google” authentication flow. If you’ve worked for a startup in the past - especially one that has since shut down - you might be vulnerable.

I demonstrated this flaw by logging into accounts I didn’t own, and Google responded that this behavior was ‘working as intended’.

The Root Cause: How Domain Ownership and OAuth Intersect

Here’s the problem: Google’s OAuth login doesn’t protect against someone purchasing a failed startup’s domain and using it to re-create email accounts for former employees. And while you can’t access old email data, you can use those accounts to log into all the different SaaS products that the organization used. 

I purchased just one of these defunct domains and discovered that logging into each of the following services granted me access to old employee accounts:

  • ChatGPT

  • Slack

  • Notion

  • Zoom

  • HR systems (containing social security numbers)

  • More…

The most sensitive accounts included HR systems, which contained tax documents, pay stubs, insurance information, social security numbers, and more.

Interview platforms also contained sensitive information about candidate feedback, offers, and rejections.

And of course, chat platforms contained direct messages, and all sorts of sensitive information that an attacker should never get their hands on.

What’s the Scale of this Vulnerability?

Here are a few facts:

  • 6 million Americans currently work for tech startups.

  • 90% of tech startups eventually fail.

  • 50% of those startups rely on Google Workspace for email.

I went through Crunchbase’s startup dataset and found over 100,000 domains currently available for purchase from failed startups.

If each failed startup averaged 10 employees over their lifetime and used 10 different SaaS services, we’re talking about accessing sensitive data from more than 10 million accounts.

To understand the issue, let's take a quick look at OAuth:

When you use the "Sign in with Google" button, Google sends the service (e.g., Slack) a set of claims about the user.

An example of a default set of claims.

These claims usually include:

  • hd (hosted domain): Specifies the domain, e.g., example.com.

  • email: The user's email address, e.g., user@example.com.

The service provider (e.g. Slack) would use one or both of these claims to determine if the user can log in.

The hd claim could be used to say “Anyone at example.com can log into the example.com workspace.”

And the email claim is used to log users into their specific account.

Here’s the issue: If a service (e.g., Slack) relies solely on these two claims, ownership changes to the domain won’t look any different to Slack. When someone buys the domain of a defunct company, they inherit the same claims, granting them access to old employee accounts.
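To make the failure mode concrete, here is a hypothetical sketch (not any real provider's code) of a downstream service that trusts only the `hd` and `email` claims. The struct and function names are illustrative:

```rust
// Hypothetical sketch: a downstream service that trusts only the `hd` and
// `email` claims from "Sign in with Google".
#[derive(Debug, Clone, PartialEq)]
struct GoogleClaims {
    hd: String,    // hosted domain, e.g. "example.com"
    email: String, // e.g. "alice@example.com"
}

// Resolve a login to an account using only the two claims. Nothing here can
// distinguish the original domain owner from whoever buys the domain later:
// identical claims resolve to the identical account.
fn resolve_account(claims: &GoogleClaims, workspace_domain: &str) -> Option<String> {
    if claims.hd == workspace_domain {
        Some(format!("account:{}", claims.email))
    } else {
        None
    }
}

fn main() {
    // A former employee's login and an attacker's login from the re-purchased
    // domain present byte-for-byte identical claims.
    let employee = GoogleClaims {
        hd: "failed-startup.com".into(),
        email: "alice@failed-startup.com".into(),
    };
    let attacker = employee.clone();
    assert_eq!(
        resolve_account(&employee, "failed-startup.com"),
        resolve_account(&attacker, "failed-startup.com")
    );
}
```

The point of the sketch is that the claims carry no signal about domain ownership changes, so there is nothing for the service to check.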

Why Doesn’t the sub Claim Solve This?

I have worked with a few of these downstream providers to look for a solution. There is a documented unique user identifier (the sub claim) that could theoretically prevent this issue, but in practice, it's unreliable.

According to a staff engineer at a major tech company:

“The sub claim changes in about 0.04% of logins from Log in with Google. For us, that's hundreds of users last week”.

Because the sub claim is inconsistent, it cannot be used to uniquely identify users - leaving services reliant on the email and hd claims.

Proposed Fix

To resolve this issue, Google could implement two immutable identifiers within its OpenID Connect (OIDC) claims:

  1. A unique user ID that doesn’t change over time.

  2. A unique workspace ID tied to the domain.
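A sketch of how account matching would work with such identifiers. The claim names `user_id` and `workspace_id` are hypothetical; Google has not published such immutable OIDC claims:

```rust
// Sketch of the proposed fix. The `user_id` and `workspace_id` claim names
// are hypothetical, not real Google OIDC claims.
#[derive(Debug, Clone, PartialEq)]
struct ProposedClaims {
    hd: String,
    email: String,
    user_id: String,      // immutable for the lifetime of the user
    workspace_id: String, // immutable for the lifetime of the workspace
}

// Account identity is decided by the immutable identifiers alone; a
// re-created domain and email no longer match the stored account.
fn same_account(stored: &ProposedClaims, presented: &ProposedClaims) -> bool {
    stored.user_id == presented.user_id && stored.workspace_id == presented.workspace_id
}

fn main() {
    let original = ProposedClaims {
        hd: "failed-startup.com".into(),
        email: "alice@failed-startup.com".into(),
        user_id: "user-1111".into(),
        workspace_id: "ws-2222".into(),
    };
    // An attacker re-creates the same domain and email in a brand-new
    // workspace, but gets fresh immutable identifiers.
    let attacker = ProposedClaims {
        user_id: "user-9999".into(),
        workspace_id: "ws-8888".into(),
        ..original.clone()
    };
    assert!(!same_account(&original, &attacker));
}
```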

I opened a vulnerability ticket in Google’s security vulnerability disclosure program outlining the problem, presenting a proof-of-concept account takeover, and proposing the addition of these OIDC claims.

Google promptly closed the issue out as “Won’t fix.”

They also classified it as a “Fraud and abuse” issue, rather than an OAuth/login issue.

I thought this would be the end of the story, but 3 months later, they re-opened my ticket (after my Shmoocon talk was accepted), paid a $1337 bounty, and said they were working on a fix.

Here is the timeline:

  • Reported to Google - Sep 30, 2024

  • Google marks as won’t fix - Oct 2, 2024

  • Shmoocon talk accepted - Dec 9, 2024

  • Google re-opens issue - Dec 19, 2024

I asked for details about what the fix would look like (e.g. are they going to add two new OIDC claims?), but they weren’t able to share any information.

What can Downstream Providers do to mitigate this?

At the time of writing, there is no fix.

To the best of our knowledge, downstream providers (e.g. Slack) cannot protect against this vulnerability unless Google adds the two proposed OIDC claims.

As an individual, once you’ve been off-boarded from a startup, you lose your ability to protect your data in these accounts, and you are subject to whatever fate befalls the startup and its domain.

Many providers allow anyone with a matching domain to join the overall workspace, regardless of their exact email address, and will then return the full list of users.

An attacker can take that user list back to the Google Workspace, re-create accounts for all the old employees, and use those accounts recursively to log into more and more services.

Secondary Concerns: Password Reset Takeovers

You may be wondering: What about users who used a username and password instead of Google SSO? Could attackers reset passwords via email from the old domain?

Short answer: Yes, this is another risk, but there are mitigations:

  1. Startups should disable password-based authentication and enforce SSO with 2FA.

  2. Service providers should require additional verification (e.g., SMS codes or credit card verification) for password resets.

These measures reduce password-based risk, but don’t address the underlying domain-based OAuth vulnerability.

Conclusion

There’s a fundamental vulnerability in Google’s OAuth implementation. Without immutable identifiers for users and workspaces, domain ownership changes will continue to compromise accounts.

Google’s eventual re-engagement with this issue is promising, but until a fix is implemented, millions of Americans' data and accounts remain vulnerable.

Here's a link to the Shmoocon talk (it starts around 5:30:00):

acdha
32 minutes ago
“Won’t fix until Schmoocon”

Homomorphic Encryption in iOS 18

1 Share

1/10/2025

PenPen's Note: This article is written with the intent of accessibility to non-maths folk who possess some computer knowhow. It comes in the wake of the shitstorm following Jeff Johnson’s recent “Apple Photos phones home on iOS 18 and macOS 15”. There’s a lot of confusion and curiosity about how this technology works, along with criticisms lobbed at Apple’s densely packed published research. The goal of this post is to distill that research into a more understandable package, so that you can make more informed decisions about your data. “Nowhere does Apple plainly say what is going on”, but maybe I can.

You are Apple. You want to make search work like magic in the Photos app, so the user can find all their “dog” pictures with ease. You devise a way to numerically represent the concepts of an image, so that you can find how closely images are related in meaning. Then, you create a database of known images and their numerical representations (“this number means car”), and find the closest matches. To preserve privacy, you put this database on the phone.

All of this, as cool as it might sound, is a solved problem. This “numerical representation” is called an embedding vector. A vector is a series of coordinates in a very high dimensional space. One dimension might measure how “dog-like” a thing is. Another might measure how “wild-like” a thing is. Dog-like and wild-like? That’s a wolf. We can compare distances using algorithms like cosine similarity. We are quite good at turning text into vectors, and only slightly worse at doing the same for images.
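A minimal sketch of the comparison step, assuming plain `f64` embeddings: cosine similarity is the dot product divided by the product of the vectors' lengths, scoring near 1.0 for close meanings and near 0.0 for unrelated ones.

```rust
// Cosine similarity between two embedding vectors: dot product divided by
// the product of the vectors' lengths (L2 norms).
fn cosine_similarity(a: &[f64], b: &[f64]) -> f64 {
    let dot: f64 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let norm_a = a.iter().map(|x| x * x).sum::<f64>().sqrt();
    let norm_b = b.iter().map(|x| x * x).sum::<f64>().sqrt();
    dot / (norm_a * norm_b)
}

fn main() {
    // Toy 3-D vectors: dimensions ≈ ("dog-like", "wild-like", "vehicle-like").
    let dog = [0.9, 0.2, 0.0];
    let wolf = [0.7, 0.8, 0.0];
    let car = [0.0, 0.0, 1.0];
    // "dog" sits much closer to "wolf" than to "car".
    assert!(cosine_similarity(&dog, &wolf) > cosine_similarity(&dog, &car));
}
```

Real embeddings have hundreds of dimensions, but the comparison is exactly this arithmetic.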

But then, your database grows. Your users don’t want all dogs, they want golden retrievers. You can no longer fit this database on a device. You’re tempted to store this database on your servers, and send the numerical representation computed on device off to them. This should be fine: vectorization is a lossy operation. But then you would know that Amy takes lots of pictures of golden retrievers, and that is a political disaster.

The promise of homomorphic encryption

Another thing we’re pretty good at is encryption. Encryption enables us to take a series of bits and scramble them, such that an observer (someone without a key) cannot read the original value. When the correct key is applied, the original value is restored.

For encryption to work, small changes to the input must change the output in unpredictable ways. If this wasn’t the case, an attacker could gradually refine an input, with the goal of creating increasingly similar encrypted outputs. When the outputs match, the attacker knows the original value.

Unfortunately, encryption as we know it is of little use to us. If we encrypt our vector before we send it, Apple’s servers cannot read the value of the vector. If Apple’s servers cannot read the value of the vector, then they do not know what database entry is most closely located to our vector (if they do know, then our encryption failed). Servers cannot tell us things they do not know. Therefore, all of this was for naught.

This reasoning sounds airtight, but one of the statements is false. What if I told you that servers can tell us things they do not know? Enter homomorphic encryption.

The premise is as follows: the client sends the server an encrypted value. The server cannot read the value. The server can modify the value, but it cannot know the new value resulting from this modification. In essence, the server is operating with a blindfold.

Take addition. You are given unknown value P, and you add known value Q to it. You can deduce that the resulting value is equal to P+Q, but you do not know what P+Q is, nor do you know P. The client decrypts the value using its key, and obtains the result of P+Q. Since the client also knows the value of P, it can backsolve for Q.

There are two main operations that hold in a homomorphic scheme:

  • Homomorphic addition: E(P) + E(Q) = E(P + Q)
  • Homomorphic multiplication: E(P) * E(Q) = E(P * Q)

Operations typically reserved for plaintext can now be performed on ciphertext!
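To make the "blindfold" idea tangible, here is a toy illustration only, NOT a real or secure scheme: a one-time additive mask modulo M is additively homomorphic. Real schemes such as BFV or BGV are lattice-based and also support multiplication, but the principle of the server computing on a value it cannot read is the same.

```rust
// Toy additively homomorphic "scheme": mask a value with a secret key
// modulo M. Insecure and for illustration only.
const M: u64 = 1_000_003;

fn encrypt(x: u64, key: u64) -> u64 {
    (x + key) % M
}

fn decrypt(c: u64, key: u64) -> u64 {
    (c + M - key % M) % M
}

// Server side: add a known value q to the ciphertext. The server learns
// neither P nor P + Q.
fn blind_add(cipher: u64, q: u64) -> u64 {
    (cipher + q) % M
}

fn main() {
    let key = 424_242; // known only to the client
    let p = 100;
    let ct = encrypt(p, key);
    let ct2 = blind_add(ct, 23); // server adds Q = 23 while blindfolded
    assert_eq!(decrypt(ct2, key), 123); // client recovers P + Q
}
```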

There are a number of complexities associated with homomorphic encryption, such as the accumulation of noise. Supporting a truly arbitrary number of operations is quite difficult, but if you can support both operations (addition and multiplication) with arbitrary depth, you have fully homomorphic encryption. The actual maths behind this is quite complex, and will unfortunately need to be out of scope. There’s some interest in creating compilers for homomorphic encryption; our code samples will be in Rust, loosely based on the Sunscreen compiler for simplicity. Concrete is likely a much more robust option, with a higher learning curve.

What do homomorphic programs look like?

Below is a simple homomorphic program that multiplies two encrypted values:

#[fhe_program]
fn multiply(a: Cipher<Signed>, b: Cipher<Signed>) -> Cipher<Signed> {
    a * b
}

fn main() {
    let (public_key, private_key) = runtime.generate_keys()?;
    // The client encrypts a value using its public key. The result can only
    // be decrypted with the private key (asymmetric encryption).
    let client_value = runtime.encrypt(Signed::from(8), &public_key)?;
    // The private key isn't sent to the server, so the server cannot decrypt the '8'.
    let res = server(public_key, client_value);
    // The client uses its private key to decrypt the value. result = 24
    let result = runtime.decrypt(res, &private_key)?;
}

// The server never receives the private key, and thus can decrypt neither
// client_value nor the result.
fn server(public_key: PublicKey, client_value: Cipher<Signed>) -> Cipher<Signed> {
    // The server encrypts a new value using the user's public key.
    let server_value = runtime.encrypt(Signed::from(3), &public_key)?;
    // The server runs 'multiply' on client_value and server_value and returns the response.
    runtime.run(multiply, vec![client_value, server_value], &public_key)?
}

This example is quite simple. It gets much more complicated when you need to perform a number of operations, such as what may be required by private information retrieval, as you need to structure the query as some primitive mathematical operations.

More specifically, HE programs by their nature tend to support only add and mul instructions. Comparisons and modulus are very difficult. So you need to structure your query in very particular ways.

For instance, to retrieve a value from a database, the query can be structured as a vector of length n, where n is the size of the database. The query is a vector of 0s, with a 1 at the index of the value you want to retrieve. Then, you perform a dot product with the database, and all values except the one you want to retrieve will be zeroed out:

// [0, 0, 1, 0, 0] * [10, 20, 30, 40, 50] = [0, 0, 30, 0, 0]
// => (0 + 0 + 30 + 0 + 0) = 30
#[fhe_program]
fn lookup(query: [Cipher<Signed>; DATABASE_SIZE], database: [Signed; DATABASE_SIZE]) -> Cipher<Signed> {
    let mut sum = query[0] * database[0];
    for i in 1..DATABASE_SIZE {
        sum = sum + query[i] * database[i];
    }
    sum
}

You may be thinking that this sounds computationally expensive. You would be correct. In terms of bandwidth, though, you can reduce the length of inputs. For instance, if you structured the database in two dimensions, the query could encode the row and the column separately, giving a query size of 2√n without any leaks. You may, however, be paying the price in execution cost.
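The 2-D trick can be sketched in plaintext arithmetic, assuming integer entries. The selection uses only additions and multiplications, so the same shape carries over to the encrypted version (at the cost of extra ciphertext-by-ciphertext multiplications, i.e. more depth):

```rust
// Plaintext sketch of 2-D private retrieval: a row selector and a column
// selector of length sqrt(n) each pick out one entry; every other term is
// multiplied by zero.
fn lookup_2d(row_sel: &[i64], col_sel: &[i64], db: &[Vec<i64>]) -> i64 {
    let mut sum = 0;
    for (r, row) in db.iter().enumerate() {
        for (c, value) in row.iter().enumerate() {
            // every term except db[target_row][target_col] is zeroed out
            sum += row_sel[r] * col_sel[c] * value;
        }
    }
    sum
}

fn main() {
    let db = vec![vec![10, 20], vec![30, 40]];
    // select row 1, column 0 -> 30, with selectors of total length 2*sqrt(4) = 4
    assert_eq!(lookup_2d(&[0, 1], &[1, 0], &db), 30);
}
```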

PenPen's Note: With Sunscreen, branching cannot depend on function parameters, and comparisons with those parameters aren’t supported. Not even ==. This is because under the hood, everything you do is just algebra with polynomials. You can, however, implement boolean logic by hand. Jeremy Kun writes that “Given this power, you can encrypt your data bit by bit, express your program as a boolean circuit—an XOR gate is addition and an AND gate is multiplication—and simulate the circuit. Since XOR and AND form a universal basis for boolean logic, you can always decompose a circuit this way”. Also, with branching, you might wonder if branches can leak information. Yes. They can. Hence, the worst-case scenario must be executed every time: all if branches must be executed, and all loops must reach their upper bounds (which also means the bounds must be statically known).

Cosine similarity is more difficult, as it is defined with division and norm operations, but you can normalize the vectors and then do simple scalar products inside the homomorphic program. Preprocessing really is the name of the game.

// compute the vector's length (its L2 norm)...
let norm = (vector1.iter().map(|x| x * x).sum::<f64>()).sqrt();
// ...then divide each element by it before encoding
let normalized_vector: Vec<Signed> = vector1.iter().map(|x| Signed::from(x / norm)).collect();

We have no way to simply return the best match. At best, we can return scores for each entry in the database, and the client can then decrypt the scores and find the best match:

For each query, the server response contains all the entries in the cluster. […] In Wally, we utilize lattice-based, somewhat homomorphic encryption (SHE) to reduce the response overhead. […] The server computes the distance function between the encrypted information and the cluster entries under SHE and returns encrypted scores. This reduces the response because the size of encrypted scores is significantly smaller than the entries.

Apple’s implementation: Wally

PenPen's Note: This section provides a high level overview of the Scalable Private Search with Wally paper.

Unfortunately, Apple’s implementation of homomorphic encryption is not as pure as what we’ve discussed above. Apple must balance both privacy and performance, which are at odds with each other (HE programs run many orders of magnitude slower than their plaintext equivalents).

Before we get to Apple’s take on HE, let’s take a step back. A non-private implementation of this search would look like this:

  1. At initialization, the server separates its documents into clusters of similar documents.
  2. The client picks the cluster that best matches the query.
  3. The client sends its vector to the server.
  4. The server returns the similarity score for each document in the cluster.
  5. The client picks the best entry.
  6. The client requests the metadata for the index of the best entry.

This has the following security flaws:

  1. The server can read the vector. Embeddings can be quite revealing.
  2. The client reveals the nearest cluster. This is a less significant issue, but it can be used to infer the query. For instance, a query with an embedding matching “dog” would likely be in the “animal” cluster.
  3. To fetch the relevant metadata, the client sends the index of the closest entry to the server. This would also be a privacy leak!

Hiding the embedding: back to homomorphic encryption

The embedding vector is by far the most sensitive piece of information that is currently being leaked.

Fully homomorphic encryption is too slow to be used by Apple, so they use somewhat homomorphic encryption. SHE supports both addition and multiplication logic, but only to a certain depth. Each math operation increases depth, as it increases the amount of noise. Parameters have been chosen that balance security and noise budgets. Obviously it would be ideal not to be working under these constraints. Unfortunately, FHE just isn’t fast enough yet.

Here are the implemented operations, where ct means E(v) and v is a vector:

SHE operation                       Result       Time (ms)   Noise (bits)
PtCtAdd(ct, v’)                     E(v + v’)    –           –
CtCtAdd(ct, ct’)                    E(v + v’)    0.004       0.5
PtCtMult(ct, v’)                    E(v ⊙ v’)    0.02        20
CtCtMult(ct, ct’)                   E(v ⊙ v’)    2.5         26
CtRotate(ct, r) for r ∈ [0, n/2)    –            0.5         0.5

In the pseudocode above, the query is a vector of many individually encrypted values. In Apple’s implementation, the whole vector appears to fit into a single value:

Overall, in our implementation, the client query and server response are a single RLWE ciphertext of sizes 226kB, and 53kB, respectively, (the former with evaluation keys)

The optimizations they make to the similarity math are described later in the paper, but I’m not gonna lie — it went over my head.

Hiding the nearest cluster

Of the three leaks, the cluster is the least significant. If the embedding is leaked, a huge amount about the content of the photo is revealed. The breed of dog. The color of the grass. The vibe. If the best match is leaked, the server knows we likely have a photo of a golden retriever. If the cluster is leaked… well it’s an animal of some sort. Apple chose to make some privacy sacrifice for the sake of performance, opting for a technique called differential privacy.

Apple uses an OHTTP anonymization network operated by a third party to proxy requests to Wally. This means that it is impossible to know which device a specific request comes from — it all blends together. In addition, the following mitigations are put in place:

  1. Clients issue a number of fake queries, then discard the results.
  2. Queries are clumped together by “epoch”. Within each epoch, a fixed number of users make queries, and their queries are processed at the end of the epoch. The queries are also sent in a random order at random times, hopefully eliminating timing side channels.

Jeff Johnson rightfully notes that this scheme is still somewhat flawed:

The two hops, the two companies, are already acting in partnership, so what is there technically in the relay setup to stop the two companies from getting together—either voluntarily or at the secret command of some government—to compare notes, as it were, and connect the dots?

Hiding the metadata

Metadata is our third leak. The solution is really quite simple. Instead of querying a cluster for the metadata at one index, the server returns the metadata for all indexes stored in that cluster.

PenPen's Note: This seems like a lot of data the client is getting anyway. I don’t blame you for questioning if the server is actually needed. The thing is, the stored vectors that are compared against are by far the biggest storage user. Each vector can easily be multiple kilobytes. The paper discusses a database of 35 million entries divided across 8500 clusters.

If the metadata is too big, the same techniques detailed in this article can be used for private information retrieval instead of private nearest neighbor search, which is what we’ve focused on up till this point.

Discussion

Before I go any further, I want to make it clear that I am just a hobbyist, not any sort of an expert on this subject matter. I first learned of homomorphic encryption while reading Jeff Johnson’s recent “Apple Photos phones home on iOS 18 and macOS 15” and later posts, and what precedes is 10 or so hours’ worth of research. I do not contest anything that he wrote.

One natural way of understanding privacy is as synonymous with secrecy. According to this interpretation, if my data is private, then nobody except me can read my data […] The right to privacy can also mean the right to private ownership. […] With Enhanced Visual Search, Apple appears to focus solely on the understanding of privacy as secrecy, ignoring the understanding of privacy as ownership.

I myself, having put in the time to piece together a huge pile of scattered information, have decided I like the feature and will leave it enabled. With that being said, technical understanding is no substitute for consent, which should have been requested by Apple along with a proper explanation.

"What happens on your iPhone, stays on your iPhone."

Apple once said that “What happens on your iPhone, stays on your iPhone.” Obviously, this is not entirely the case, and it wasn’t the case long before Apple began using homomorphic encryption.

But what information is really leaving your device? There is no trust-me-bro element. Barring some issue being found in the math or Apple’s implementation of it, for the first time the cloud is able to act as a sort of extension of your device and your data, which is an immensely exciting proposition. Apple has managed to categorise photos without knowing anything about what they contain. How cool is that.

Had this project come first, before the commoditization of the smartphone and social media, I would’ve written something about a slippery slope to less and less careful use of the cloud. But we’re already living in a world where all our data is up there, not in our hands. This project represents a small step back in the right direction, and I cannot get over how cool it is. I just wish that Apple would be more upfront.

We live in amazing times.

P.S. If you made it this far, you clearly like math, and if you clearly like math, you’ll clearly like my app, Maculate!


Cracks Form Around Letting RFK ‘Go Wild’ - TPM – Talking Points Memo

1 Share

Amid reports that the adults in Donald Trump’s room may be convincing him to put some guardrails in place on his HHS nominee — whom he vowed to let “go wild on health” — there are also reportedly some cracks forming around the nominee himself within Republican circles.

But it’s not clear if the anti-RFK contingent of Trump allies is yet strong enough to actually make a dent in, let alone imperil, RFK Jr.’s chances of being confirmed as HHS secretary.

Earlier this month, Sen. Bill Cassidy (R-LA), an actual medical doctor and the chair of the Senate Health, Education, Labor and Pensions Committee, which would be responsible for holding an RFK Jr. confirmation hearing, began making noises about his reluctance to hand over HHS to an anti-vaxxer like RFK. He told Fox News that Kennedy is flat “wrong” on vaccinations and his belief that they’re dangerous and cause autism in America’s youth.

“I will meet with him this coming week,” he said on Fox News Sunday on Jan. 5. “I look forward to the interview. I agree with him on some things and disagree on others. The food safety, I think the ultra-processed food is a problem.”

“Vaccinations, he’s wrong on, and so I just look forward to having a good dialogue with him on that,” Cassidy said.

After meeting with Kennedy on the Hill, Cassidy did not exactly change his tune. Per Politico Playbook:

Yesterday, Sen. BILL CASSIDY (R-La.) — who in addition to chairing the HELP Committee is a medical doctor — met with Kennedy and offered an unenthusiastic-but-diplomatic statement, saying he “had a frank conversation” and spoke with Kennedy “about vaccines at length.”

On Wednesday, more Republican opposition to RFK’s nomination arose, this time due to his hard-to-pin-down stance on abortion, which has ranged from supporting efforts to pass federal Roe protections to claiming he supported non-existent “full-term” abortions to agreeing to back any Republican effort to pass federal abortion restrictions in Congress.

A group founded by former Vice President Mike Pence, Advancing American Freedom, put out a letter Wednesday calling on Republican senators to oppose RFK’s nomination over his abortion positions, calling them “completely out of step with the strong, pro-life record of the first Trump Administration.” The letter was first reported by the conservative Daily Wire.

“While RFK Jr. has made certain overtures to pro-life leaders that he would be mindful of their concerns at HHS,” the AAF said in their letter, “there is little reason for confidence at this time.”

The letter criticized Kennedy as being “pro-abortion,” citing his past support for abortions later in pregnancy. This position is “completely out of step with the strong, pro-life record of the first Trump Administration,” the group wrote.

“Whatever the merits of RFK Jr’s Make America Healthy Again initiative — indeed, whatever other qualities a nominee might possess — an HHS Secretary must have a firm commitment to protect unborn children, or else bend under the pressure and pushback surrounding these daily, critical decisions,” AAF President Tim Chapman and Board Chairman Marc Short, Pence’s former chief of staff, wrote in a letter to senators. 

As head of HHS, Kennedy would have some jurisdiction over the Trump administration’s abortion maneuverings, as both funding for Planned Parenthood and the FDA’s approval of mifepristone fall under HHS’s purview.

Catch up on our live coverage of Pam Bondi’s hearing before the Senate Judiciary Committee here: Pam Bondi Up For Grilling

The latest from Khaya Himmelman on the North Carolina Supreme Court election and Republican efforts to overturn it: North Carolina Republican Tries New Strategy For Stealing State Supreme Court Race


Why This OnlyFans Model Posts Machine Learning Explainers to Pornhub

1 Comment

Justice Alito should watch this Pornhub video about calculus.

acdha
53 minutes ago
The future is weirder than anyone predicted

Democrats and the Gig Economy - TPM – Talking Points Memo

1 Comment

There’s a cottage industry of takes these days on how Democrats can again become the “party of the working class.” Many of those are reactive and defensive, or operate on misleading or ill-considered concepts of what the 21st-century working class even is. But today I had one of these pop into my inbox that I read and thought, yeah, that makes a lot of sense. The gist is that Democrats should make themselves the party of gig workers. The title of the article is “Champion the Self-Employed.” But as author Will Norris explains, the demographic and economic profile of those technically categorized as “self-employed” has changed pretty dramatically in recent years. It still includes the generally high-earning and disproportionately white and male consultants and solo operators of various sorts. But as a group it’s now much, much larger — especially in the wake of the pandemic — and is more female and less white. It’s also much lower income, more precarious.

acdha
1 hour ago
“It was only recently that I first heard the phrase, or first registered the phrase, “subsistence entrepreneurs,” which helped me think more clearly about the mix of hustle, precarity and exposure to the various tech monopolies. Norris cites research showing that probably at least 15% of the workforce is properly categorized as independent/self-employed and more expansive definitions may put that number at well over a quarter of the workforce. That’s a lot of people.”

Microsoft Will Not Support Office on Windows 10 After October 14

1 Comment
Microsoft will stop supporting its Microsoft 365 (formerly known as Office 365) desktop applications on Windows 10 after October 14, the day the company is retiring the old operating system, it said.
jepler
4 hours ago
harsh.

The future of AI-powered work for every business | Google Workspace Blog

1 Comment

acdha
5 hours ago
Translation: “we know you haven’t found our AI features worth paying for so we’re no longer giving you the choice”