
Conversation

@eisenwave (Member) commented Feb 9, 2026

The current guarantee is excessively high. To remain compatible with 16-bit platforms (where size_t is 16 bits wide), the size of an object has to be representable as a 16-bit unsigned integer.

The current limit of 262,144 makes C++ effectively unimplementable on 16-bit architectures, unless size_t uses multi-precision arithmetic.
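(Editorial illustration, not part of the original report: a minimal sketch of the arithmetic behind this claim. The array name is hypothetical, and the static_assert relies only on the guaranteed minimum for SIZE_MAX.)

```cpp
#include <cstddef>
#include <limits>

// sizeof yields std::size_t, so no complete object can be larger than
// SIZE_MAX bytes; the standard only guarantees SIZE_MAX >= 65'535.
static_assert(std::numeric_limits<std::size_t>::max() >= 65'535,
              "guaranteed minimum for SIZE_MAX");

// On a target where size_t is exactly 16 bits, an object of the Annex B
// "potential" size could not even have its size expressed as a size_t:
// unsigned char blob[262'144]; // too large when SIZE_MAX == 65'535
```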

@eisenwave added the P2-Bug (Presentational errors and omissions) and P3-Other (Triaged issue not in P1 or P2) labels and removed the P2-Bug label on Feb 9, 2026
@eisenwave (Member, Author) commented Feb 9, 2026

And I get that these implementation limits aren't normative; they're merely recommendations.

The way I understand them is:

Any reasonable implementation should provide at least these limits, and thus any reasonable C++ user can rely on them.

We do support 16-bit architectures, and they're not some academic hypothetical like modern non-8-bit-byte C++ compilers. It would be entirely unreasonable to provide types larger than 65K on 16-bit microcontrollers (or on 8-bit microcontrollers with 65K bytes of addressable memory), or to expect such types.

@jensmaurer added the cwg (Issue must be reviewed by CWG.) and not-editorial (Issue is not deemed editorial; the editorial issue is kept open for tracking.) labels on Feb 9, 2026
@jensmaurer (Member)

I'm not messing with the quantitative values of implementation limits editorially.

@jwakely (Member) commented Feb 9, 2026

The current limit of 262,144 makes C++ effectively unimplementable on 16-bit architectures, unless size_t uses multi-precision arithmetic.

Absolute nonsense! It just means a 16-bit implementation can document that its limit is lower than the "potential" limit in the informative annex. That's not "unimplementable" at all.

@villevoutilainen (Member)

The submitter seems to be confused about what was recommended here. The suggestion about submitting editorial PRs was about adding a Note. Not about messing with these values themselves, because while Annex B is informative, that doesn't make changes to it purely editorial.

This is a waste of time, anyway.

@jensmaurer closed this on Feb 9, 2026
@eisenwave (Member, Author)

The submitter seems to be confused about what was recommended here. The suggestion about submitting editorial PRs was about adding a Note. Not about messing with these values themselves, because while Annex B is informative, that doesn't make changes to it purely editorial.

No, the note relates to running into implementation limits during reflection and isn't relevant to this PR.

@eisenwave (Member, Author) commented Feb 9, 2026

Absolute nonsense! It just means a 16-bit implementation can document that its limit is lower than the "potential" limit in the informative annex. That's not "unimplementable" at all.

I literally said "unless size_t uses multi-precision arithmetic." So clearly, it is implementable, with a relatively unconventional technique.

And yes, an implementation can declare the limit to be lower; the question is what limit we want to recommend to implementations, considering the architectures that C++ targets. Do we want to recommend to compilers to make size_t 32-bit on a 16-bit microcontroller, and convey to users that this is the usual recommended behavior? Probably not.

@jwakely (Member) commented Feb 9, 2026

I literally said "unless size_t uses multi-precision arithmetic." So clearly, it is implementable, with a relatively unconventional technique.

It's implementable even without that technique, just by having a smaller limit.

And yes, an implementation can declare the limit to be lower; the question is what limit we want to recommend to implementations, considering the architectures that C++ targets. Do we want to recommend to compilers to make size_t 32-bit on a 16-bit microcontroller, and convey to users that this is the usual recommended behavior? Probably not.

No. Nobody is recommending that, and having a limit that doesn't fit in 16 bits doesn't require that or recommend it. These aren't even recommendations, just "possibilities".

A 16-bit implementation can just have a smaller limit. That's it.

But 16-bit platforms are not the norm, so it doesn't seem helpful to tune the limits for those and then give values which are misleading for the implementations used by the majority of developers.

@villevoutilainen (Member)

Consider how large an object your #embed will realistically end up generating.

Making the suggested limits even smaller than the already-too-small values they are now doesn't seem like the most productive of endeavors.
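(Editorial aside on the #embed point: a sketch assuming a compiler with C++26 #embed support; "firmware.bin" is a hypothetical file name. sizeof(firmware) equals the size of the embedded file, which can easily exceed the Annex B value of 262,144 bytes.)

```cpp
// Embedding even a modest binary blob produces a single object whose size
// is far beyond 262'144 bytes.
static const unsigned char firmware[] = {
    #embed "firmware.bin"
};
```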

@eisenwave (Member, Author) commented Feb 9, 2026

Honestly, I'm not sure it's helpful to even have a potential minimum for the maximum size of an object, because it depends so heavily on the architecture. Most other quantities refer to properties of the compiler, such as the number of case labels supported in a switch.

The maximum size of an object is a quantity that depends on the execution environment/target more than on the compiler, so it's hard to pin down any number that isn't arbitrary.

65K makes a lot of sense because size_t is at least 16 bits (SIZE_MAX must be at least 65,535), and even 8-bit microcontrollers tend to provide 16-bit addressing (e.g. the Intel 8080), so you're really not going to see a smaller maximum object size. The current number of 262,144 seems completely arbitrary and is not a limitation that even 0.1% of C++ developers will actually encounter. That's not really informative, just misleading and useless. It's stupidly low, but not for any apparent reason such as the limits of size_t or 16-bit architectures.

I'm merely trying to make this number more meaningful; if it's just a "potential" value, then we may as well dramatically increase it to something you will commonly see on a 32-bit or 64-bit architecture, or drop this "potential" value entirely because it's too architecture-specific or arbitrary, no matter what value we choose.
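(Editor's aside, for reference: the candidate values in this thread, written as powers of two. Nothing here is platform-specific; it just spells out the arithmetic.)

```cpp
static_assert(262'144 == 1UL << 18,       "Annex B 'size of an object' value");
static_assert(65'535  == (1UL << 16) - 1, "largest value of a 16-bit size_t");
static_assert(32'767  == (1UL << 15) - 1, "largest value of a 16-bit ptrdiff_t");
```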

@villevoutilainen (Member)

Yes, you want it to be more meaningful, you want it to have an actual technical impact, you want it to have teeth.

None of those things are ever done via editorial pull requests.

@AlisdairM (Contributor)

Beating a dead horse, but I note that the limit was set to 262,144 back in the original C++98 standard, and I suspect it reflects a similar value in the C standard of the day. 16-bit architectures were encountered significantly more often 28 years ago, so it would seem strange to reduce our expected support for today's architectures now.

@jwakely (Member) commented Feb 9, 2026

Yes, I was going to make the same point. I think it's safe to assume that people familiar with 16-bit implementations were involved in choosing the original value, so who is this change supposed to benefit? Implementers of those 16-bit compilers, who have not raised this as an issue in three decades? Users of 16-bit compilers, who are unlikely to be surprised if their compiler has a lower limit than 32-bit or 64-bit compilers? This is just making work for people for no clear benefit.

In any case, even if making a change were desirable, 64k would be wrong. In practice, object sizes are limited by ptrdiff_t, because you need to be able to represent the difference between pointers to the first and last bytes of an object, and that difference has type ptrdiff_t. So a signed 16-bit integer is what's relevant here. For example, GCC options like -Wlarger-than, -Walloc-size-larger-than, -Wvla-larger-than, etc. all default to PTRDIFF_MAX.
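(Editorial sketch to make the ptrdiff_t point concrete; the 16-bit numbers in the comments are what a typical strictly 16-bit implementation would report, not a quote from any compiler's documentation.)

```cpp
#include <cstddef>
#include <cstdint>
#include <cstdio>

int main() {
    // Pointer subtraction yields std::ptrdiff_t, so the distance between the
    // first and one-past-the-last byte of an object must fit in that signed type.
    char buffer[1024];
    std::ptrdiff_t span = (buffer + sizeof buffer) - buffer; // 1024 here

    // On a typical 16-bit implementation, SIZE_MAX == 65'535 but
    // PTRDIFF_MAX == 32'767, so roughly 32K, not 64K, is the practical cap.
    std::printf("PTRDIFF_MAX = %jd\n", static_cast<std::intmax_t>(PTRDIFF_MAX));
    std::printf("SIZE_MAX    = %ju\n", static_cast<std::uintmax_t>(SIZE_MAX));
    std::printf("span        = %td\n", span);
}
```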

@eisenwave (Member, Author) commented Feb 10, 2026

Yes, you want it to be more meaningful, you want it to have an actual technical impact, you want it to have teeth.

I'm not changing policy or enforcement here, just updating a number in an informative annex. This is spiritually equivalent to updating a line of code in an example code block, or rephrasing a sentence in a note.

Honestly I'm a bit surprised and disappointed at how [implimits] is being treated here. It's treated as inconsequential. A 16-bit implementation can just have a lower limit for the object size, and it wouldn't need to document that. https://eel.is/c++draft/intro.compliance.general#9 merely recommends that implementation limits are documented, but doesn't require it. Ergo, [implimits] is entirely inconsequential; the specific values listed there could live outside the standard, and it would make no difference.

On the other hand, there's unwillingness to make editorial changes, despite this informative annex imposing no requirements whatsoever on the implementation. Personally, I think it would be a waste of time to change these values in the scope of CWG or LWG issues if they are entirely inconsequential.

Seems like a bit of a Catch-22: you can't change them editorially because that's too big of a change, but you can't change them non-editorially because it would be a waste of time.

@jwakely (Member) commented Feb 10, 2026

The change would make the standard worse, whether done editorially or not.

@eisenwave (Member, Author) commented Feb 10, 2026

The change would make the standard worse, whether done editorially or not.

I'm not under the impression that any other value would have found consensus either. Jens doesn't want to change these quantities editorially on principle, and Ville seems to simultaneously consider changes in this area unacceptable to do editorially and also a waste of committee time (non-editorially, presumably).

I also haven't seen anyone argue either that we shouldn't change 262K because 262K is the best possible value right now, or that some value other than 65K and 262K would be better.

Do you actually think that a limit of 262K is the most relevant possible value we can give to developers right now, just like in C++98? Do you have any better suggestion?

@jwakely (Member) commented Feb 10, 2026

64k would be wrong as I explained above, so it's not an improvement. 32k would be tiny and a bad choice for the majority of implementations in use today. Since we don't have a better suggestion, and we don't know the specifics of why the current value was chosen, we should not change it. Seems pretty simple to me.

@jwakely (Member) commented Feb 10, 2026

You're just creating work for other people (to consider this change, look into the history of the current value, etc.) for no tangible benefit.
