mgh95 16 hours ago

Perhaps the most telling part of this entire report is Table 1. It shows that non-work usage has grown 8x in one year, whereas work usage has grown only ~3.4x. Considering that non-work usage now makes up 73% of requests, ChatGPT is very much in the consumer market, despite substantial marketing of LLM products in a professional context and even outright compelled usage in some corporations.

Since consumers in the B2C market are typically tight-fisted, I don't think this bodes well for the long-term economics of the market. This may explain the relatively recent pivot to attempting to "discover" uses.

I don't think this ends happily.

  • CuriouslyC 13 minutes ago

    OAI has a very strong potential play in the consumer devices market. The question is whether they approach it right. If OAI developed high-end laptops/tablets with deep AI integration, with hardware designed around a very specific model architecture (hyper-sparse large MoE with cold-expert marshalling/offloading via NVMe), that would be incredibly compelling. Don't forget they've got Jony; it wouldn't just be a groundbreaking AI box, it'd be an aesthetic artifact and status symbol.

  • dolphinscorpion 16 hours ago

    "I don't think this ends happily."

    Still, 700 million users, and they can still add a lot of products within ChatGPT. Ads will also be slapped on answers.

    If all fails, Sam will start wearing "Occupy Jupiter" t-shirts.

    • autoexec 14 hours ago

      > Ads will also be slapped on answers.

      Ads won't be slapped onto answers, my guess is that they will be subtly and silently inserted into them so that you don't even notice. It won't always be what you see either as companies, political groups, and others who seek to influence you will pay to have specific words/phrases omitted from answers as well.

      AI at this point is little more than a toy that outright lies occasionally yet we're already seeing AI hurting people's ability to think, be creative, use critical thinking skills, and research independently.

      • gruez 13 hours ago

        Unlikely. That would be in direct contravention of FTC disclosure rules, which even google adheres to.

        • xphilter an hour ago

          And how long will those rules exist? If people want these rules, they should be made into laws.

        • _aavaa_ 12 hours ago

          Put the disclosure in the footnotes that come with the link.

        • haijo2 13 hours ago

          It would def get rejected in the EU.

      • MangoToupe 4 hours ago

        I'm skeptical there's much value in this. Politics and ads are both loud and obnoxious for a reason. Subtle product placement and propaganda is not easy in text.

    • mgh95 15 hours ago

      And Friendster at one point had over 100M users. Gross margin (and more importantly, positive cash flow) matters more than user counts. This data is not a good indicator of either.

      • og_kalu 15 hours ago

        They have literally hundreds of millions of users that are completely free. Not Google-search or Facebook free, but free free, and they only suffer a few billion in losses. Inference is cheap and their unit economics are fine. There is literally no business that would be making a profit under those constraints. If they need to make a profit, they can implement ads and that will be that.

        • mgh95 15 hours ago

          In 2024 (when the customer mix was more favorable) they lost $5B against roughly $10B in forward-looking ARR.

          They aren't pulling an Amazon and balancing cash flow against costs. They're just incinerating money on a low-value userbase. Even at FB ARPU the economics are still very poor.

          • og_kalu 14 hours ago

            >In 2024 (when customer mix was more favorable)

            Okay, so still hundreds of millions of users

            >They aren't pulling an Amazon and balancing cash flow against costs.

            Nobody said they were. I said having hundreds of millions of completely free users would suck the profitability of any business, and that the remedy would be simple, should the need for it arise.

            >They're just incinerating money for a low value userbase.

            If you don't see how implementing ads in a system designed for natural conversations with users whose most common queries are “Practical Guidance” and “Seeking Information” could be incredibly valuable, then you have no foresight and I don't know what to tell you.

            >Even at FB arpu the economics are still very poor.

            No they aren't and I honestly have no idea what you're talking about. Inference is cheap and has been for some time.

            • mgh95 14 hours ago

              I don't think you realize the issue. They aren't monetizing their SaaS product satisfactorily -- hence the Amazon cash flow imbalance statement. This indicates they must find new markets to survive. Despite this, however, they are gaining only in poorer markets, limiting the monetizability of a high cost product.

              Implementing ads is a hail-mary. It puts them in a knife fight with Google, which will likely result in a race to the bottom that OpenAI cannot sustain and win.

              FB global ARPU is about 50 USD. At 700M users, that's $35B in revenue annually. This compares to a publicly stated expected cost of approximately $150B in computing alone over the next 5 years (see: https://fortune.com/2025/09/06/openai-spending-outlook-115-b...). That leaves a profit of $5B per year, against $90B in expected R&D costs. Even if OpenAI ships a product and fires all its employees, you are looking at a payback period of about 18 years.

              Fundamentally, OpenAI does not have the unit economics of a traditional SaaS. "Hundreds of millions of users" is hundreds of millions of people consuming expenses and not generating sufficient revenue to justify the line of business as a going concern. This, coupled with declining enterprise AI adoption (https://www.apolloacademy.com/ai-adoption-rate-trending-down...) paints an ugly picture.
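A quick sketch of the back-of-envelope math in this subthread. All inputs are the commenter's claimed figures (user count, FB-like ARPU, stated compute and R&D spend), not audited numbers:

```python
# Back-of-envelope check of the ARPU argument above.
# All inputs are the comment's claimed figures, not audited numbers.
users = 700e6            # claimed user count
fb_arpu = 50.0           # assumed Facebook-like global ARPU, USD/year
compute_5yr = 150e9      # stated 5-year compute spend, USD
rd_5yr = 90e9            # stated 5-year R&D spend, USD

revenue = users * fb_arpu               # 35.0B USD/year
compute_per_year = compute_5yr / 5      # 30.0B USD/year
margin = revenue - compute_per_year     # 5.0B USD/year left over
payback_years = rd_5yr / margin         # ~18 years to recoup the R&D spend

print(f"revenue={revenue/1e9:.0f}B  margin={margin/1e9:.0f}B  "
      f"payback={payback_years:.0f}y")
```

Under these assumptions the 18-year figure falls straight out of the division; the whole argument hinges on whether the ARPU and spend inputs are realistic.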

              • ares623 8 hours ago

                Facebook users spend multiple hours per day doomscrolling. The operational costs of a doomscrolling user are minimal. Most of it will be served from a CDN.

                Imagine 700M users “doomchatting” with GPT5 for several hours per day to justify the ROI of advertising.

              • haijo2 13 hours ago

                V nice post. As a corporate finance and valuation enthusiast - I approve.

              • og_kalu 13 hours ago

                >Despite this, however, they are gaining only in poorer markets

                They are gaining everywhere. Some more than others, but to say they are only gaining in poorer markets is blatantly untrue.

                >FB global ARPU is about 50 USD. At 700M customers, they do 35B in revenue annually.

                Yeah, and that would make them healthily profitable.

                >This compares to a publicly stated expected cost of approximately 150B in computing alone over the next 5 years

                Yes, because they expect to serve hundreds of millions to potentially billions more users. 'This leaves a profit of 5B per year' makes some very bizarre assumptions. You’re conflating a future-scale spending projection with today’s economics. That number is a forward-looking projection tied to massive scale - it doesn’t prove current users alone justify that spend, and they clearly don't. There is no reality where they spend that much if their userbase stalls at today's numbers, so it's a moot point and '5B per year' is a made-up number.

                >Fundamentally, OpenAI does not have the unit economics of a traditional SaaS.

                Again, everything points to their unit economics being perfectly fine.

                • menaerus 5 hours ago

                  There's one thing you're missing - inference is not cheap. HW is not cheap. Electricity is not cheap. And this is without R&D. The report shows that, on average, they recorded ~2.627B requests/day. This is ~79B requests/month or ~948B requests/year. And this is only the consumer ChatGPT data; Enterprise isn't included AFAICT. Each request translates into a direct cost that can be at least roughly estimated.
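The request-volume arithmetic above, with a purely hypothetical per-request cost plugged in to show how it scales. The per-request cost is an invented illustrative number; the real figure is not public:

```python
# Request-volume arithmetic from the report's reported average.
daily_requests = 2.627e9          # reported average requests/day
monthly = daily_requests * 30     # ~79B requests/month
yearly = monthly * 12             # ~946B requests/year

# Hypothetical per-request cost in USD -- purely illustrative,
# the actual per-request cost is not publicly known.
cost_per_request_usd = 0.002
annual_inference_cost = yearly * cost_per_request_usd  # ~$1.9B at this guess

print(f"yearly requests ~= {yearly/1e9:.0f}B, "
      f"annual cost at ${cost_per_request_usd}/req ~= "
      f"${annual_inference_cost/1e9:.1f}B")
```

Whether that total is "cheap" or "expensive" depends entirely on the assumed per-request cost, which is exactly what the two commenters disagree about.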

                  • og_kalu 3 hours ago

                    No, inference is pretty cheap, and a lot of things point to that being true.

                    - Prices of API access to open models from third-party providers who would have no motive to subsidize inference

                    - Google says their median query is about as expensive as a google search

                    Thing is, what you're saying would have been true a few years ago. This would all have been intractable. But LLM inference costs have quite literally been slashed by several orders of magnitude in the last couple of years.

                    • menaerus 2 hours ago

                      You would probably understand if you knew how LLMs are run in the first place but, as ignorant as you are (sorry), I have no interest in debating this with you anymore. I tried to give a tractable clue which you unfortunately chose to counter-argue with non-facts.

                      • og_kalu an hour ago

                        Touting requests per day is pretty meaningless without per-query numbers, but sure, I'm the one that doesn't understand. What people with no incentive to subsidize are charging is about as factual as it comes, but sure, lol.

                        I've replied to you once man. Feel free to disengage but let's not act like this has been some ongoing debate. No need to make up stories.

                        • menaerus 35 minutes ago

                          Which is why I said that it can be roughly estimated. And it can be roughly estimated even without those numbers, assuming a fleet of some size X and some number of hours that fleet is utilized per day, for the whole year. Either way, you will end up with a hefty number. Do the math and you'll see that inference is far from cheap.

                          • og_kalu 6 minutes ago

                            Of course all the requests add up to a hefty number; they're serving nearly a billion weekly active users. What else would you expect? Google search, Facebook - those would all be hefty numbers. The point is that inference is pretty cheap per user, so when they get around to implementing ads, they'll be profitable.

                            Again, there are many indicators that inference per user is cheap. Even the sheer fact that Open AI closed 2024 serving hundreds of millions of users and lost 'only' 5B should clue you in that inference is not that expensive.

                • mgh95 11 hours ago

                  No, the economics are horrible. At current 30Y T-bond rates, your money doubles every ~15 years. Your money grows faster in US Treasuries than in OpenAI. That's disastrous.
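The ~15-year doubling claim follows from compound interest (or approximately from the rule of 72). The 4.7% yield below is an assumption, roughly in line with recent 30Y Treasury rates:

```python
import math

# Assumed 30-year Treasury yield (~4.7%); actual rates vary daily.
yield_30y = 0.047

# Exact doubling time under annual compounding.
doubling_exact = math.log(2) / math.log(1 + yield_30y)   # ~15.1 years

# Rule-of-72 approximation for comparison.
doubling_rule72 = 72 / (yield_30y * 100)                 # ~15.3 years

print(f"exact: {doubling_exact:.1f}y, rule of 72: {doubling_rule72:.1f}y")
```

Both methods land around 15 years at this assumed yield, matching the commenter's figure.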

                  • og_kalu 3 hours ago

                    You're moving the goalposts now. Stop making up numbers and presenting them as fact. What you think OpenAI's returns will be is your opinion, and not a reason to claim 'poor unit economics'.

                  • simianwords 5 hours ago

                    Why do you think the economics are horrible?

                    • mangamadaiyan 3 hours ago

                      They've explained why. Now why do you think the economics are not horrible?

                      • simianwords 2 hours ago

                        I have the same question as my sibling comment. Where did they get the numbers?

      • bix6 15 hours ago

        But if they slap on ads their margin improves, so it's possible, no?

        • euLh7SM5HDFY an hour ago

          Yes, but only if the users can't move to competition. Just look at YouTube, there were relatively few ads until all other video sites were dead.

        • mgh95 15 hours ago

          Yeah, but the problem then becomes you are in a knife fight with Google. Welcome to margin compression on already thin margins and high capex. It's not like they buy cheap commodity hardware, or have Google's depth of talent for ASICs + DC management.

          Once OpenAI turns to ads, I think it's an indicator they are out of ideas.

          • babelfish 14 hours ago

            You could have said this about Meta 15 years ago, and now they're a ~$2 trillion company!

            • haijo2 13 hours ago

              Meta (FB, IG, WA) and Google Search aren't really comparable... OAI and Google definitely are though.

              • MangoToupe 4 hours ago

                Google has a monopoly on search (maybe not as seen by courts, but certainly practically). OpenAI is in a commodity brawl with dozens of companies. Still, if anything saves their ass it's that "chatgpt" is synonymous with "chatbot".

      • standardUser 15 hours ago

        No one ever paid for social media, or was expected to in the future.

        • mgh95 14 hours ago

          Nobody expected social media to require the kind of compute that LLM services demand. And that's the point: user counts are irrelevant; even large enterprises with advantageous cost structures can die under poor management.

    • empiko 7 hours ago

      IMO adding ads is not going to be that easy. It is relatively easy to implement ads when the user is scrolling through tons of content and the ads are "organically" injected into the stream. But if the user is seeking a specific answer in a chat-like GUI, what do you do exactly? Whatever ad you show will have to be visually distinguished, and the user will just scroll past it to get to the "true" answer they want. Sure, you will still get the product in front of some eyes, but I would expect this to be less effective than other social-media-based ads.

      • justincormack 3 hours ago

        Buying recommendations and taking a cut on purchases are more natural than ads, and they are already going this way.

      • morgoths_bane 7 hours ago

        I suspect that if they ever implement ads they will use existing methods and technologies. In Duolingo, if you finish a lesson you get an ad at the end; I can imagine that after 5 prompts or whatever you get a full-screen ad that you cannot skip. They may also use things like banner ads in addition to ads embedded in the scroll feed. They may also do something like what Meta did with the option to share the AI slop with other users, and that is where I would suspect the ads to be placed as well. I do not think that if you ask ChatGPT for directions for the Heimlich it will also tell you about a great deal at Del Taco. These corpos are not going to let some clanker dictate what the ad is; they want full control of their message, so they're going to use techniques that are well understood and have metrics that can be clearly explained to their customers.

  • dzink 2 hours ago

    Replace AI with electricity and the argument looks very different. I think the whole industry is going the Utility route over time. When electricity or railroads or shipping containers or other similar large infrastructure-cost systems were first released the value unlocked for the smallest most profitable customers expanded consumption far more than it did for the large users at the beginning. In electricity for example: Few could have predicted data centers, or crypto, or electric cars boosting demand at the start. As soon as something becomes cheaper with scale (which is what AI companies are going for) the consumption skyrockets as tech catches up. The utility down side is obviously guaranteed monopoly eventually and potentially government involvement, or in this case possibility of AI becoming a chunk of the government as well. Especially with social media content steering votes (text generation really being a tool to steer human opinions) and power, and public funding as a result.

    • TriangleEdge an hour ago

      I think AI will also enable the discovery of psychopaths and narcissists, so the dystopia mentioned is uncertain. When AI can confidently boil someone down to labels like this, we may get competent leadership for the first time ever.

      • macintux an hour ago

        Openly being psychopaths & narcissists hasn't impeded today's autocrats. Apparently it's a positive indicator for many.

  • resfirestar 14 hours ago

    The statistic is from ChatGPT consumer plans, so I don't think it says anything useful about enterprise adoption of LLM products or usage patterns in those enterprise contexts.

    • adeelk93 13 hours ago

      Exactly. Enterprise use has a carveout for analytics - so it wouldn’t be in the paper’s population anyways

  • EagnaIonat 9 hours ago

    I think the data might be skewed.

    They only analyzed the consumer plans and ignored the Enterprise, Teams, and Education plans.

  • DenisM 15 hours ago

    Consumers have low friction on the way in and on the way out. Especially when media hype gets involved.

    Businesses have higher friction - legal, integrations, access control, internal knowledge leaks (a document can be access-restricted but its contents may leak into a more open query). Not to mention the typical general inertia. This friction works both ways.

    Think capacitive vs inductive electric circuits.

    • ares623 7 hours ago

      Consumers do have very, very high friction with chatbots, as clearly demonstrated by the GPT-5 update and the loss of GPT-4.

      • lm28469 2 hours ago

        Meanwhile I have 5 free accounts on each of the big platforms that I rotate whenever I hit a limit.

    • mgh95 15 hours ago

      I don't see how friction is the primary driver here. ChatGPT is available through the biggest enterprise sales channel there is -- Azure. The Microsoft enterprise sales engine is probably the best in the world.

      Similarly, if costs double (or worse, prices increase to a point close to typical SaaS margins) and LLMs lose their shine, I don't think there will be friction on the way out. People (especially executives) will offer up ChatGPT as a sacrifice.

  • andy99 16 hours ago

    If people find it useful but enterprise adoption is lagging, doesn't that indicate there's still a big upside?

    On the other hand, I remember when BlackBerry had enterprise locked down and got wiped out by consumer focused Apple.

    In any event, having big consumer growth doesn't seem like a bad thing.

    It will be bad if it starts a race to the bottom for ad-driven offerings though.

    • majormajor 11 hours ago

      > If people find it useful but enterprise adoption is lagging, doesn't that indicate there's still a big upside?

      It could indicate that many people find it more of an entertainment product than a tool, and those are often harder to monetize. You've got ads, and that's about it, which puts a probable cap on your monthly revenue per user below most of the subscription prices these companies are trying to charge (especially outside the USA).

      (I find it way more of a tool and basically don't use it outside of work... but I see a LOT of AI pics and videos in discord and forums and such.)

    • ares623 15 hours ago

      It’s been shoved down enterprise throats for months/years. Shareholders, CEOs, workers (at the start), and users (at the start) have never had such a unified understanding of what they want as in this AI frenzy. All the stars were aligned for it to gain traction. And yet…

      It’s the prodigal child of tech.

    • mgh95 15 hours ago

      When Apple sells a device, they get more revenue with minimal costs, turbocharging revenue and profits.

      When OpenAI sells a ChatGPT subscription, they incur large costs just to serve the product, shrinking margins.

      Big difference in unit economics, hence the quantization push.

  • lispisok 13 hours ago

    OpenAI makes a profile of you based on your chat history and people are far more personal with these things than Google search. It's gonna be a goldmine when they decide to use that profile to make money.

  • vonnik 10 hours ago

    As Google has shown, the consumer and business markets are not either/or.

  • ares623 16 hours ago

    Looks like they only included actual chats and not agentic/copilot usage. IMO that makes the study quite incomplete.

    • mgh95 16 hours ago

      The chats alone are backbreakingly costly relative to the market mix of ChatGPT.

      Rest of the market be damned -- combined with the poor customer mix (low- to middle-income countries), this explains why the big labs have pushed so hard to quantize models and save costs. You effectively have highly paid engineers/scientists running computationally expensive models on some of the most expensive hardware on the market to serve how-to instructions to people in low-income countries.

      This doesn't sound good, even for ad-supported business models.

      • haijo2 13 hours ago

        Yep, and let's not forget, those people are incredibly price-sensitive.

        Is there enough product differentiation between OAI and Gemini? Not that I can see. And even if the price were low, that's not the point - people hate paying a penny for something they expect to be free.

        By the time OAI has developed anything that enables them to acquire and exercise market power (profitably), they will have run out of funding (at least on favourable terms). That could cause key talent to leave for competitors, and so on. Essentially a downward spiral to death.

      • ares623 16 hours ago

        I also wonder how much of those "writing" assistance is for propaganda, troll farms, or scams. Such value. $500B well spent.

  • qwerty_clicks 9 hours ago

    Work usage could be dropping because of limits on ChatGPT in the workplace and the use of Copilot in a secured Microsoft tenant instead.

  • standardUser 15 hours ago

    LLMs are the next ISPs, and those households who haven't yet found room for one in their monthly budgets soon will. And much like with ISPs, I'd expect the starting $20/mo to evolve over time into a full-size utility bill. Not all households, of course, but at utility scale nonetheless.

    • gdhkgdhkvff 2 hours ago

      The difference is ISPs usually have monopoly/duopoly pricing power and LLMs already have freely available open source models. If one AI company decides they want to start gouging, they have to compete with other providers AND open source. And if all of the ai companies start colluding on price gouging, there’s always the option of new competitors cloud hosting open source models.

      That said, I do think prices will eventually increase somewhat, unless SOTA models start becoming profitable at current prices (my knowledge here is at least 6 months old, so maybe they already have?)

PeterStuer an hour ago

The absolute worst use I have encountered by far is people using ChatGPT to self-diagnose their presumed psychological conditions.

Of course ChatGPT goes in hard, sycophantically confirming all 'suggestive' leads with zero pushback.

LeicaLatte 11 hours ago

I think the 73% non-work usage ratio will flip again within 2-3 years, but not because consumer usage shrinks. As AI becomes embedded in workflows through APIs the "work" category is set to expand dramatically.

qwerty_clicks 9 hours ago

I commonly switch between ChatGPT, Perplexity, and Copilot - whatever is closest to my mouse or shortcut. Copilot is clearly the worst of the three, but I have no true loyalty and, most of the time, don't care. I suspect I am getting weak model responses from Perplexity at times, but it's good enough to keep moving fast. Sam mentioned bringing memory to people, not just because it's what people want but, I suspect, because it will help lock people into one platform via snowballing context.

kristopolous 8 hours ago

This is crazy. I was having a conversation earlier where I divided up the use cases, and my taxonomy was almost identical: I didn't include "writing help" but had everything else. Then I guessed the trends and nailed the order of usage.

I mean, how often do you make a fairly speculative claim and then an hour later see a just-published report validating it? Nuts.

I personally hate chatgpt's voice (writing style) but I guess that's a minority position.

lazyant 16 hours ago

[flagged]

  • Cheer2171 14 hours ago

    Delete this comment, then delete your account.

  • nerevarthelame 13 hours ago

    I get that not everyone wants to read a 62-page paper. But this has an abstract, a conclusion, and an accompanying blog post that each serve the same purpose as this summary. Just use the existing, better materials if you're not willing to go in depth.

  • LeoPanthera 15 hours ago

    Don't do that. If I wanted to read slop I could generate it myself.

    • senectus1 13 hours ago

      I think that's the point of the article.. people don't want to generate their own slop (read an article and be forced to form an informed opinion); they want it made for them (the article summarized so they can skim an opinion and move on to the next slop)

  • jamiek88 14 hours ago

    Relevant user name.

daviding 16 hours ago

Not going to read all that.. ;)

> ChatGPT is widely used for practical guidance, information seeking, and writing, which together make up nearly 80% of usage. Non-work queries now dominate (70%). Writing is the main work task, mostly editing user text. Users are younger, increasingly female, global, and adoption is growing fastest in lower-income countries

  • dgfitz 16 hours ago

    > Users are younger, increasingly female, global, and adoption is growing fastest in lower-income countries

    Young moms with no money in poor countries use this product the most. I bet that was fun news to deliver up the chain.

    • fragilerock 42 minutes ago

      That's funny - the way I interpreted this sentence is that usage was already high among older, male users in high-income countries, so most of the new users are coming from outside those demographics. Which, ironically, is the exact opposite of what you're saying.

    • Gigachad 13 hours ago

      Surely this user base can make back the hundreds of billions of dollars they invested in it.

      • klibertp an hour ago

        If they mostly ask how to raise their children and follow the received advice... Then yeah, in some 20 years we'll see what kind of return we get. People raised on social media are one thing; people raised by (with the assistance of) ChatGPT may be even worse off because of it.

    • standardUser 15 hours ago

      A strong foothold among an ambitious, educated, technologically-connected cohort in emerging economies? Yes please.

      • kingstnap 14 hours ago

        No amount of LinkedIn speak can fix the poor part of it.

        In 2025, it's abundantly clear that the mask is off. Only the whales matter in video games. Only the top donors matter in donation funding. Modern laptops with GPUs are all $2k+ machines. Luxury condos are everywhere. McDonald's revenues and profits are up despite pricing out a lot of low-income people.

        The poor get less of the nothing they already have. You can make a hundred affordable cars or make as much, if not orders of magnitude more, profit with a single luxury vehicle sale.

      • dgfitz 13 hours ago

        You have no idea if they're ambitious or educated. Absolutely no idea. Is it just commonplace to inject "facts" into conjecture? It comes off as desperate.