[implimits] Reduce size of an object to 65,535 #8737
Conversation
And I get that these implementation limits aren't normative; they're merely recommendations. The way I understand them is …
We do support 16-bit architectures, and they're not some academic hypothetical like modern non-8-bit-byte C++ compilers. It would be entirely unreasonable to provide types larger than 65K on 16-bit microcontrollers (or 8-bit microcontrollers with 65K bytes of addressable memory), or to expect them.
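For illustration only (not from the thread), a minimal sketch of how a codebase targeting such microcontrollers might guard object sizes at compile time; the name kFrameBufferSize and the 40,000-byte figure are purely hypothetical:

```cpp
#include <cstdint>   // SIZE_MAX

// Illustrative buffer size; kept in a type wider than any size_t so the
// comparison below is meaningful even on a 16-bit target.
constexpr unsigned long long kFrameBufferSize = 40'000;

// On a target with a 16-bit size_t, SIZE_MAX is 65'535, so any object that
// does not fit under that bound simply cannot exist on the platform.
static_assert(kFrameBufferSize <= SIZE_MAX,
              "frame buffer does not fit this target's size_t");

unsigned char frame_buffer[kFrameBufferSize];
```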
I'm not messing with the quantitative values of implementation limits editorially.
Absolute nonsense! It just means a 16-bit implementation can document that its limit is lower than the "potential" limit in the informative annex. That's not "unimplementable" at all.
The submitter seems to be confused about what was recommended here. The suggestion about submitting editorial PRs was about adding a Note, not about changing these values themselves, because while Annex B is informative, that doesn't make changes to it purely editorial. This is a waste of time, anyway.
No, the note relates to running into implementation limits during reflection and isn't relevant to this PR. |
I literally said "unless …". And yes, an implementation can declare the limit to be lower; the question is what limit we want to recommend to implementations, considering the architectures that C++ targets. Do we want to recommend to compilers to make …
It's implementable even without using that technique, just by having a smaller limit.
No. Nobody is recommending that, and having a limit that doesn't fit in 16 bits doesn't require that or recommend it. These aren't even recommendations, just "possibilities". A 16-bit implementation can just have a smaller limit. That's it. But 16-bit platforms are not the norm, so it doesn't seem helpful to tune the limits for those and then give values which are misleading for the implementations used by the majority of developers.
Consider how large the objects your #embed directives will realistically end up generating. Making the suggested limits even smaller than the already-too-small values they have doesn't seem the most productive of endeavors.
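As context, a minimal sketch of how #embed produces a sized object (C23 / C++26 syntax; the file name and array name are hypothetical): the directive expands to a comma-separated list of byte values, so the resulting array is exactly as large as the embedded resource.

```cpp
// splash.png is a hypothetical asset; a 200 KiB file would already yield
// an object well above 65'535 bytes.
static const unsigned char splash_image[] = {
#embed "splash.png"
};

static_assert(sizeof(splash_image) > 0);
```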
Honestly, I'm not sure it's helpful to even have a potential minimum for the size of an object, because it so heavily depends on the architecture. Most other quantities refer to properties of the compiler, such as the number of … The maximum size of an object is a quantity that depends on the execution environment/target more than on the compiler, so it's hard to pin down any number that isn't arbitrary. 65K makes a lot of sense because … I'm merely trying to make this number more meaningful; if it's just a "potential" value, then we may as well dramatically increase it to something you will commonly see on a 32-bit or 64-bit architecture, or drop this "potential" value entirely because it's too architecture-specific or arbitrary, no matter what value we choose.
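To show how target-dependent that ceiling is, a small sketch assuming nothing beyond the standard library: the value printed below is 65,535 on a target with a 16-bit size_t, roughly 4.29 billion on a typical 32-bit target, and astronomically larger on 64-bit targets.

```cpp
#include <cstddef>
#include <iostream>
#include <limits>

int main() {
    // The largest size that sizeof can ever report on this implementation.
    std::cout << "size_t can describe objects of at most "
              << std::numeric_limits<std::size_t>::max() << " bytes\n";
}
```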
Yes, you want it to be more meaningful, you want it to have an actual technical impact, you want it to have teeth. None of those things are ever done via editorial pull requests.
Beating a dead horse, but I note that the limit was set to 262,144 back in the original C++98 standard, and I suspect it reflects a similar value in the C standard of the day. 16-bit architectures were encountered significantly more frequently 28 years ago; it would seem strange to reduce our expected support for today's architectures now.
Yes, I was going to make the same point. I think it's safe to assume that people familiar with 16-bit implementations were involved in choosing the original value, so who is this change supposed to benefit? Implementers of those 16-bit compilers, who have not raised this as an issue in three decades? Users of 16-bit compilers, who are unlikely to be surprised if their compiler has a lower limit than 32-bit or 64-bit compilers? This is just making work for people for no clear benefit. In any case, even if making a change were desirable, 64k would be wrong. In practice object sizes are limited by …
I'm not changing policy or enforcement here, just updating a number in an informative annex. This is spiritually equivalent to updating a line of code in an example code block, or rephrasing a sentence in a note.

Honestly, I'm a bit surprised and disappointed at how [implimits] is being treated here. It's treated as inconsequential. A 16-bit implementation can just have a lower limit for the object size, and it wouldn't need to document that. https://eel.is/c++draft/intro.compliance.general#9 merely recommends that implementation limits are documented, but doesn't require it. Ergo, [implimits] is entirely inconsequential; the specific values listed there could live outside the standard, and it would make no difference.

On the other hand, there's unwillingness to make editorial changes, despite this informative annex imposing no requirements whatsoever on the implementation. Personally, I think it would be a waste of time to change these values in the scope of CWG or LWG issues if they are entirely inconsequential. Seems like a bit of a Catch-22: you can't change them editorially because that's too big of a change, but you can't change them non-editorially because it would be a waste of time.
The change would make the standard worse, whether done editorially or not. |
I'm not under the impression that any other value would have found consensus either. Jens doesn't want to change these quantities editorially on principle, and Ville seems to simultaneously consider changes in this area unacceptable to do editorially and also a waste of committee time (non-editorially, presumably). I've also neither seen anyone argue that we don't want to change 262K because 262K is the best possible value right now, nor that some value other than 65K and 262K would be better. Do you actually think that a limit of 262K is the most relevant possible value we can give to developers right now, just like in C++98? Do you have any better suggestion?
64k would be wrong as I explained above, so it's not an improvement. 32k would be tiny and a bad choice for the majority of implementations in use today. Since we don't have a better suggestion, and we don't know the specifics of why the current value was chosen, we should not change it. Seems pretty simple to me.
You're just creating work for other people (to consider this change, look into the history of the current value, etc.) for no tangible benefit. |
The current guarantee is excessively high. To remain compatible with 16-bit platforms (where size_t is 16-bit), the size of an object has to be representable as a 16-bit unsigned integer. The current limit of 262,144 makes C++ effectively unimplementable on 16-bit architectures, unless size_t uses multi-precision arithmetic.
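For illustration only (none of this is part of the PR), a sketch of the arithmetic behind that claim: sizeof yields a size_t, so no single object larger than SIZE_MAX can be described, and with a 16-bit size_t that ceiling is 65,535 bytes, well below the Annex B suggestion of 262,144.

```cpp
#include <cstdint>

constexpr std::uintmax_t annex_b_suggestion    = 262'144;  // current [implimits] value
constexpr std::uintmax_t sixteen_bit_size_t_max = 65'535;  // SIZE_MAX when size_t is 16 bits

// An implementation with a 16-bit size_t cannot describe an object at the
// suggested limit without multi-word size arithmetic.
static_assert(annex_b_suggestion > sixteen_bit_size_t_max,
              "the suggested limit exceeds what a 16-bit size_t can represent");
```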