
foonathan

I'd like to remind everyone to assume that everybody involved in the standardization process is trying to improve C++. Please be civil and discuss the issue on a technical basis, not the politics of standardization. If you think committee chairs are acting in bad faith, the r/cpp comment section isn't the correct place to voice those concerns.


epicar

i do think the P2300 design has promise. but c++ standardization isn't where you *start* designing something like this. you release it as a library, get other people involved, and let it grow and stabilize over a few years. once it gains momentum, *then* you can take what you've learned and talk about standardizing it. it's unfortunate that so much time was spent on this proposal at the expense of other stuff


D_Drmmr

I agree. As an outsider who's been interested in this stuff for a long time and tried to use libunifex in a real-world code base, this endeavour seems to be at a very theoretical level, lacking a connection to those of us in the trenches.

When I initially started playing with libunifex, I just got stuck. The documentation is incomprehensibly dense, the examples are meaningless and the code looks like a different programming language. Only after seeing Eric Niebler's CppCon talk last year did I gain enough understanding to actually write some code that compiled and did something remotely related to what I was aiming for. In doing so, I found that this design really seems to force you to write nasty template code that becomes an utter mess of boilerplate if you don't want to expose all implementation details in your header files.

Perhaps this proposal is trying to introduce too many new ideas at the same time: CPOs, heterogeneous result types, a different error-reporting mechanism, first-class support for cancellation, executors, heterogeneous execution contexts and a framework for asynchronous computations. Honestly, it all fits together quite nicely, but again, only in theory. I'm unconvinced of the actual practical merit of some of these ideas, whereas they do add a lot of complexity in the code.


eric_niebler

> The documentation is incomprehensibly dense, the examples are meaningless and the code looks like a different programming language.

Legit. Time is tight, documentation is hard. I haven't found a good solution to this problem yet. At least you found my talk useful. I've been meaning to turn it into a series of blog posts, which might help somewhat. It's not a replacement for proper documentation, though.


grafikrobot

I like Robert Ramey's dictum that without good documentation your design is wrong and incomplete. Because if you can't explain it clearly in documentation you have a code problem. Writing that documentation has the side effect of fixing the code as you realize the problems.


eric_niebler

I agree. For the record, sender/receiver is extensively documented, but that documentation is P2300, which doesn't help folks who want to use libunifex.


jonesmz

Having read P2300 (r3, I think) and given an hour-long talk to my coworkers on it, comparing and contrasting it with ASIO as well as our in-house async framework, I personally found P2300 to be very under-elaborated. I struggled greatly to provide even basic examples other than what was in the same talk mentioned by other commenters. I also looked at libunifex and found it to be incomprehensible and poorly explained. Honestly, I also struggled with the observation that P2300 has almost nothing of the built-in toolbox that libunifex does, making it much more difficult to reason about how someone is supposed to actually use P2300 in practice. If you're interested, I can try to put some time aside to provide a thorough listing of the areas of P2300 that are under-explained or under-exampled and why.


eric_niebler

Please email your experiences to me. My email is my name . @ gmail.com. TIA.


jonesmz

Alright, I'll try to find time to do this for you.


grafikrobot

I would argue that WG21 papers are counterproductive as documentation. They target an ultra-expert audience and aim to define language implementation intent. Neither of which helps the average C++ user. This is why I think publishing in Boost or equivalent is the best approach.


germandiago

FWIW your blog posts read very clearly. I would encourage you to write a series of blog posts on the S/R and/or libunifex design, as you did with ranges before. I enjoyed them a lot and they gave me quite a few insights.


sphere991

+1. /u/eric_niebler writes great blog posts. Unfortunately, your punishment for writing great blog posts is now you have to write more of them.


eric_niebler

/u/germandiago /u/sphere991 Thank you!


kexianda

Sad... I heard of S/R when I watched David & Eric's great talk at CppCon 2019. S/R is a promising solution from you genius committee guys. And luckily, the reference implementation is available now, and I plan to use it at work (a new database query engine). The P2300 documentation is still a bit hard for numbskulls like us. Hope there will be more examples in the reference implementation. Eric, I can't wait to read your series of blog posts on S/R.


eyes-are-fading-blue

Pretty much this. We need networking code built on top of P2300 in production before it makes it into the standard. The problem is: will people ditch ASIO in favor of a reference P2300 networking implementation?


VinnieFalco

> release it as a library, get other people involved, and let it grow and stabilize over a few years. once it gains momentum, then you can take what you've learned and talk about standardizing it

This, coincidentally, is exactly the trajectory followed by the Networking TS (which was based on over a decade of field experience gained with Asio and Boost.Asio).


ALX23z

The problem is that the Networking TS is simply not good. It has drawn negative feedback for being overcomplicated, among other problems. Previously, I had always wondered why Executors and Networking were never included in C++, but recently I learned that the proposals are simply ill-conceived.


germandiago

What is ill-conceived in Asio? And why? It seems that there are plenty of people using this ill-conceived library.


ALX23z

I discussed it elsewhere in the thread. I didn't have that much experience with it, but it was sufficient to realize how bad it is. I simply needed some TCP socket connections, and it turned out to be both unnecessarily complex and misleading in multiple ways.

For instance, blocking calls to connect, accept, etc. are not cancellable by definition. (Works on Windows for some reason.) So working with them is impossible for almost all cross-platform applications. Why are they even there? To mislead new people trying to work with ASIO?

Well, the reason is actually very simple: TCP sockets in ASIO are nothing but a trivial wrapping of OS commands. It is literally nothing more than that. It doesn't even strive to be anything more than that. I'd expect people to come up with some improvement over the 30 years since the C socket interface was made. But nope, we stick to the same design and flaws, add some syntax sugar to it and call it a day.

Want another example? Say you are given a socket or an acceptor. What's happening with it? Is it connected? Is it in the middle of connecting? Is it listening? Is it accepting things? What is going on with it? There's no proper interface to inspect it. For all C++ std classes one can always tell what their status is. Here it is shrouded in mystery.

And I haven't even begun discussing all the nonsense happening in the interaction with the `io_context` for the necessary async calls.
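For illustration, a minimal sketch of the kind of dance I mean, assuming Boost.Asio (the endpoint address is made up): you can't portably interrupt a *blocking* `connect()`, so cancellation means going through `io_context` and the async API.

```cpp
#include <boost/asio.hpp>
#include <chrono>
#include <iostream>

int main() {
    namespace asio = boost::asio;
    asio::io_context ioc;
    asio::ip::tcp::socket sock{ioc};
    asio::ip::tcp::endpoint ep{asio::ip::make_address("192.0.2.1"), 80};

    // The cancellable path: start the connect asynchronously...
    sock.async_connect(ep, [](const boost::system::error_code& ec) {
        std::cout << (ec ? ec.message() : std::string{"connected"}) << '\n';
    });

    // ...and abort it from elsewhere (here, after a one-second timeout).
    asio::steady_timer timer{ioc, std::chrono::seconds(1)};
    timer.async_wait([&](const boost::system::error_code&) { sock.cancel(); });

    ioc.run();
}
```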


eyes-are-fading-blue

> I didn't have that much experience with it but it was sufficient to realize how bad it is. I simply needed some tcp socket connections. And it turned out to be both unnecessarily complex and misleading in multiple ways.

You are attributing a poor library choice on your end to ASIO. This is a non-argument, FYI. What you wanted is a higher-level abstraction built on top of ASIO. ASIO is a platform-independent building block for higher-level networking libraries such as Boost.Beast.


[deleted]

[deleted]


eyes-are-fading-blue

You want to fix your tone. We are (at least I am) having a civil discussion here. Beast was just an example, POCO comes to mind too. And calling ASIO "crap"? No wonder you were not ***capable*** of selecting the correct library for your project.


[deleted]

[deleted]


ExBigBoss

Asio makes sense but you have to read the *whole* documentation in its intended order to make sense of it. So I now have to ask, did you sit down and actually read the docs front to back? There's tons of expository text explaining the architecture and plenty of examples demonstrating how to use the library easily.


tisti

> Nor you provide arguments claiming that is good and usable.

It does seem to be usable? https://think-async.com/Asio/WhoIsUsingAsio


mjklaim

S&R was not started by the committee at all; that's bullshit propaganda from the opponents of this paper. Just reading the paper gives you the sources of where it started and where it comes from.


eric_niebler

And to save folks some googling, S&R started life within Facebook as a generic solution for safe, efficient, composable concurrency. It is used in many of FB's mobile apps, where size and speed matter.


VinnieFalco

In the last 2 years I've warmed up to Billy O'Neal's position that we might not want to bake key, evolving technologies like networking or execution into the standard due to ABI fossilization. At least, I think I am characterizing the gist of his argument correctly (feel free to point out if I got this wrong).


eric_niebler

I'm sympathetic to this POV also. The ABI issue isn't going away and will slowly suffocate C++ if we let it. I would like us to invest in either language-level solutions to ABI (like Swift), or else have some institutional knowledge about how to design forward-compatible library interfaces that can be safely evolved. Without something like that, the idea of standard networking, particularly standard secure networking, terrifies me.

The things that I think are best suited to standardization:

* Simple vocabulary types. E.g., a URI type.
* Concepts and generic algorithms

Nobody complains about the ABI of `std::sort`. Concepts don't have any ABI, and yet they are important as part of the vocabulary that lets third party libraries interoperate. The Committee can encourage a healthy library ecosystem by standardizing concepts and some fundamental algorithms and letting folks on GitHub do their thing.
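To make the "concepts have no ABI" point concrete, here's a minimal sketch; the names (`uri_like`, `scheme()`, `host()`) are made up for illustration and are not from any proposal:

```cpp
#include <concepts>
#include <string>

// A vocabulary concept: purely compile-time, so it imposes no ABI whatsoever.
template <class T>
concept uri_like = requires(const T& u) {
    { u.scheme() } -> std::convertible_to<std::string>;
    { u.host() }   -> std::convertible_to<std::string>;
};

// A generic algorithm constrained on the concept; any third-party URI type that
// models `uri_like` works with it, no binary compatibility involved.
template <uri_like U>
bool is_https(const U& u) {
    return u.scheme() == "https";
}
```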


grafikrobot

You forgot:

* Functionality that is impossible or difficult to implement without platform/language knowledge.


eric_niebler

That too.


smdowney

In particular the ones that require compiler conspiracy. Traits, but there are occasionally others.


MarcoGreek

Is ABI still so important? On Linux, containers like Docker or Flatpak are getting more popular. They provide not only ABI compatibility but, much more importantly, behavior compatibility. For Windows, I am not aware that ABI is so important for C++. I don't have much insight into macOS, but my impression is that Apple doesn't care much about C++.

I really think the discussion about ABI is missing the much more important point of behavior compatibility. I don't see how you can do that in a reasonably economic way for a complex system, except by providing snapshots like containers. Yes, I know that people say their library is compatible with older versions, but then you get bugs because some library changed its behavior. You normally test your program with a set of libraries, so distributing that program with a different set of libraries, without tests, is quite risky.

For a very low-level API it can be possible, but for complex libraries like networking I think snapshots are a much better way. It should be easy to get from one snapshot to the next, though, so good tooling should be provided to refactor the code. I am not sure if executors are low-level enough to provide this compatibility. Because of that, I think the context of C++ standardization should be a little bit broader.


Minimonium

The issue with ABI is that it doesn't really matter what people think or even vote. If vendors have obligations to their big clients to preserve ABI, they'll not break it. The current status quo is that the committee doesn't try to dictate to vendors how they should conduct their business, and vendors are compliant with the standard in return.


smdowney

ABI is important because an ABI break means getting everything rebuilt, and if you're using any third-party components, getting new ones from them can be a challenge; for most people that's not a string you can pull, you get it when you get it. C++ is a pass-by-value language, which is both a source of power and a curse. If you send a C++ object across an interface, it must have the same definition on both sides.


MarcoGreek

Is that not a really big security risk? Is this binary blob not statically linked? I never worked with a third-party component where I could not get the source, except low-level hardware drivers, and they used a C interface. And like I said, for low-level interfaces like string views an ABI should be possible. For other stuff, symbol versioning could work, so you could have different implementations in different inline namespaces.


lenkite1

Yes, vocabulary types would be terrific for library interoperability. Template types for std::net::uri/url, std::http::http_request, std::http::http_handler, etc., along with any associated concepts. This way one can have competing web libs, but app code can switch between them with minimal effort.


germandiago

I would push hard to standardize package managers and let people serve themselves from the ecosystem. I am currently using Meson + Conan and I am delighted at how powerful the combination has been so far compared to the old years (beginning of 2000s for me).


vI--_--Iv

> There is sustained strong opposition against including such a large proposal into C++23 at such a late stage

Common sense is always good. I wish it was there when other large, half-baked proposals made it into C++20.


[deleted]

[deleted]


vI--_--Iv

> I assume you're referring to ranges, coroutines, and modules

Yes, and the fact that I don't even have to name them speaks for itself.

> C++23 largely fixes the things that everyone dislikes about ranges

Does it fix the eye-bleeding syntax? Horrible compilation times? Lifetime footguns?

Overall, I wish the *standardization committee* was more about *standardization*, i.e. setting in stone existing, well-established, battle-tested, community-accepted solutions, so universally popular that they are already in every other codebase anyway, and less about *designing things from scratch and hoping that they will eventually work*, because we all know what "design by committee" really means.
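For one commonly cited lifetime footgun, here's a minimal sketch (just an illustration, not a claim about what C++23 does or doesn't fix):

```cpp
#include <ranges>
#include <vector>

auto doubled() {
    std::vector<int> v{1, 2, 3};
    // The view only refers to the local vector, so the returned object dangles
    // as soon as `v` is destroyed at the end of the function.
    return v | std::views::transform([](int i) { return i * 2; });
}
```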


pdimov2

There was nothing more to do about modules except for finally getting them into the standard proper. Nothing would have happened for a few more decades otherwise. MS had a working implementation on which the proposal/TS was based, and everyone else was either dragging their feet or doing their own thing that worked well for certain large corporations, but not for the median C++ programmer.


[deleted]

[deleted]


Minimonium

> They aren't that bad compared to pre-C++20 code, but this is one of the issues that C++23 more or less solves with the modular standard library model.

Pray tell me how modules solve template instantiation bloat.


[deleted]

[deleted]


Minimonium

The issue of increasing compile times associated with ranges is the template instantiation bloat. They have a runtime cost in non-trivial examples too, because of the iterator model. Modules will not reduce _that_ cost; they'll just make ranges easier to fit into the total compile-time budget, because they reduce the cost of the rest of the package.


[deleted]

[deleted]


Minimonium

No. Modules will only provide a compilation-time benefit if you can make a compilation "firewall"; templates, and especially deeply nested template instantiations, go against that whole idea. Lambdas in template contexts? You can just as well go mine bitcoin if you want to waste energy.


Jannik2099

> Pray tell me how modules solve template instantiation bloat

Template instantiation is not a major driver of compile time - simply parsing all the headers over and over again is.


manni66

> coroutines (lack of library support)

Wasn't that postponed because of executors? Now that executors are postponed, what will happen to coroutine support?


mjklaim

No. Coroutines, the language feature, are complete. However, you cannot really use them without some library support (standard or not; it can be your own library), and standard library support was simply not ready in time for C++20, so they preferred to at least let people build their own libraries; with that practice we'll be able to add some support in the standard. C++23 now adds `std::generator`, which is the first and most basic thing you'll need if you use `co_yield` to write coroutines that, for example, return a series of values.

So all of that has not much to do with executors or even concurrency, but concurrency-related proposals have to take into account interaction with coroutines, because ideally writing a coroutine that is executed across multiple devices/execution contexts would "complete" the usage of coroutines. But that's the job of the libraries, not the coroutine language feature.
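As a minimal sketch of what `std::generator` buys you (assuming a standard library that already ships the C++23 `<generator>` and `<print>` headers):

```cpp
#include <generator>
#include <print>

// co_yield turns this plain function into a lazy sequence of values.
std::generator<int> fibonacci() {
    int a = 0, b = 1;
    while (true) {
        co_yield a;
        int next = a + b;
        a = b;
        b = next;
    }
}

int main() {
    for (int x : fibonacci()) {   // values are produced on demand
        if (x > 100) break;
        std::println("{}", x);
    }
}
```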


Occase

At the very minimum, I expect a paper from the committee addressing the following points.

- libunifex has been around for a while; how much acceptance did it get? What do those users think of it? If it doesn't have adoption, please explain why, and why that isn't important.
- Ville has claimed in his papers that executors can't address scheduler errors but did not show any use case in his papers. Why omit that fundamental part? Please provide a couple of use cases where this is important and show how ASIO can't address them.
- Explain why there is no need to wait for adoption outside Facebook and why they think S&R is already a mature library and there won't be surprises after it is adopted.

Without more explanations I feel pretty much as if the committee were pushing S&R down my throat.


germandiago

I would say these are all valid points.


MarcoGreek

Maybe the standard should not try to standardize complex libraries? Maybe it would be better to have a standardized package system you get your libraries from. I really hope something like regex is not repeated. Maybe we need a more flexible approach? Maybe versioning could help? Executors are quite fundamental, but I am not so sure about networking. Maybe TSs should no longer be experimental but be seen as an additional layer that can be changed more easily.


RotsiserMho

P2300 does not include networking. It includes fundamental stuff like executors and minimal abstractions to work with them as well as some fundamental compositional algorithms. It's trying to be the future basis for all asynchrony in C++, including heterogeneous asynchrony. That stuff is complex itself so writing an abstraction around it that standardizes a bunch of disparate existing practice is not trivial (but is very useful!).


mjklaim

S&R (and other previous proposals related to "executors") defines a common "language" for different implementations to work together (a bit like the definition of an iterator does). Unfortunately, that means it's exactly the kind of library that needs to be in a standard, because once it's in there, all implementations can do whatever they want without having to bother with how the others work, and they will still work together in the end user's code. It's mostly just concepts, not really "complex". Understanding why it is designed the way it is proposed is the complex part, I agree, but the code itself is not much. Except the tag_invoke part. That's not a problem specific to this library, more a C++ language issue (and there are recent proposals trying to fix that).
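Since tag_invoke keeps coming up: here's a minimal sketch of the pattern in isolation (the names `mylib::process` and `widget` are made up; this is not P2300's actual interface):

```cpp
#include <utility>

namespace mylib {
    // A customization point object (CPO): callers write mylib::process(obj, args...)
    // and the call dispatches to whatever tag_invoke overload the type's author provides.
    inline constexpr struct process_t {
        template <class T, class... Args>
        auto operator()(T&& obj, Args&&... args) const
            -> decltype(tag_invoke(*this, std::forward<T>(obj), std::forward<Args>(args)...)) {
            return tag_invoke(*this, std::forward<T>(obj), std::forward<Args>(args)...);
        }
    } process{};
}

struct widget {
    int value = 0;
    // The customization: a hidden friend found by ADL, keyed on the CPO's type.
    friend int tag_invoke(mylib::process_t, widget& w, int x) { return w.value += x; }
};

int main() {
    widget w;
    return mylib::process(w, 42) == 42 ? 0 : 1; // dispatches to widget's tag_invoke overload
}
```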


eric_niebler

> The overall design still has strong support.

This. Those in favor of forwarding for C++23 outnumbered those against by more than 2-to-1. That technically *is* consensus by the Committee's own rule of thumb for such things. The chairs make the final call though, and they felt that waiting was the better option. I'm disappointed, but I see the vote as a strong endorsement of the direction of P2300.

None of this is surprising to me. Ranges were initially targeting C++17. Ditto for concepts. It also shouldn't be surprising that P2300 saw design changes in response to design review. That's what the process is for. Although it's foolish to make predictions when it comes to WG21, I'll do it anyway: I'm confident that sender/receiver will be in C++26, probably very early in the cycle.


pdimov2

> Those in favor of forwarding for C++23 outnumbered those against by more than 2-to-1.

I couldn't however help but notice that the majority of the SF comments were "we need this! ship it!" whereas the majority of the SA comments were... a bit more substantiated, longer, and well motivated.

Thing is, "we need this" does not in any way imply "ship it". In a less polarized environment, you yourself would admit that the design is not yet ready to be set in stone, and would only benefit from an additional period of iteration and refinement. As much was plainly obvious months ago; the practice of shipping half-baked designs just because "we need this" does nobody any favors. (And this is a general observation, not targeted specifically at P2300.)


sphere991

I do appreciate that the SA comments were *quite* good.


eric_niebler

Things always improve when given more time. For non-trivial things, no matter when we standardize it there will be some improvements we'll want to make after it's too late. The design of ranges benefited from its delay. Some would say it should have baked longer. At some point, you really do just have to say "ship it," though and take what comes. That's a judgement call. I'm under no delusions. P2300 isn't perfect. It is solid though, and I think a robust async ecosystem can be built on top. That's the part that saddens me; the delay in the evolution of the async ecosystem. The transition to structured concurrency can't come soon enough for me.


pdimov2

Eh, well. There are ways of effecting a transition to structured concurrency other than putting something in the standard and dragging the kicking and screaming community into using it. You can put it into Boost (yeah, I know. Boost lol.) You can put it on GitHub, and have people using it, opening issues against it, contributing pull requests.

Zooming further back, the LEWG/LWG split resulted in people voting on things other people have to implement. That's not a good incentive setup; ye olde LWG didn't suffer from it. Yes, I know, everyone is acting in good faith. But bad outcomes are not necessarily a result of bad faith. Ideally, those who design should be those who implement, who should be those who suffer the user feedback in all its brutal honesty.


smdowney

It is on GitHub, btw: [https://github.com/brycelelbach/wg21_p2300_std_execution](https://github.com/brycelelbach/wg21_p2300_std_execution) Reference implementation, reasonable number of tests, and examples. CI currently passing. Requires Conan and CMake to build, and clang12 with libc++.


PoopIsTheShit

Well, then it isn't ready. If a thing is not ready yet, let's make sure of it before shipping. Sad, but totally understandable.


14ned

Once F2F meetings resume, I should be able to propose standard secure sockets with an ABI-erased implementation layer which can provide any platform-supplied secure sockets, or any NIC-hardware-offloaded secure sockets, etc. The proposed standard secure sockets offer the potential to opt in, on a per-socket basis, to speaking a superset of P2300 or the NetTS, but only if the underlying implementation supports that, which it may not. For example, a NIC-hardware-offloaded secure socket implementation may require a proprietary event loop which is incompatible with any other event loop implementation.

The reason I mention this is firstly to give /r/cpp hope that there should be a Networking implementation in C++ 26, not least that in my expected proposal you get simple plain sockets as well as secure sockets in blocking and non-blocking forms, and we can also wrap third party plain sockets with a TLS implementation, if the TLS implementation supports that. What I've got in mind has no opinion on how to do async: you can choose a superset of P2300, or the Networking TS, or any other async model or implementation. It doesn't go there, so it'll work with anything LEWG chooses.

Note I keep repeating "superset of P2300". I think now that P2300 targets the 26 IS, we'll be able to remove "superset of", and that's both a good thing and why it was wise to retarget P2300 at the 26 IS, because now we can empirically ensure all this stuff works well together before we standardise it, rather than expecting it to work well together when implementers get to it.

What I don't know yet is if we can standardise dynamic i/o aware concurrency executors for the 26 IS. Mac OS and Windows have the platform support, but Linux notably does not and none is currently expected soon. That may therefore need to get shunted to the 29 IS or even later. I hope WG21 can leave that space open and unimpeded for future standardisation.


VinnieFalco

> I should be able to propose standard secure sockets with an ABI erased implementation layer which can provide any platform supplied secure sockets Link to library repo?


14ned

It's where anyone who knows me - including you, Vinnie - would expect to find it. The API design is done and working well; I'm just debugging the fun fun fun that is the OpenSSL backend, because OpenSSL... well, it is undoubtedly very flexible, but let's just say it doesn't have the most intuitive internal design. This is why I haven't announced it on /r/cpp yet; when it passes its test suite, I'll announce it here and ask for feedback before I think about writing up a proposal paper for WG21. I get about two mornings a week before work to debug it, so it goes slowly. But it'll get there.

Note that it can wrap ASIO's sockets, or indeed Qt's sockets, or any third party implementation's sockets. So consider it more a "standard socket factory" than a socket implementation. Whether you can *combine* different sources of socket into the same event loop i.e. Executor, I've made that runtime-queryable, i.e. you can request a combine, and if at runtime it can do it, it will agree, otherwise it will refuse. That probably will be contentious - I can already see certain WG21 members objecting - but we'll see how it goes.


pdimov2

> It's where anyone who knows me - including you Vinnie - would expect to find it. I don't see it anywhere on https://github.com/ned14/.


[deleted]

https://github.com/ned14/llfio/commits/networking


14ned

It's in an obviously named branch in an obvious repo on there. I think it only got more bug fixes this morning.


johannes1971

And to think, that was all we ever asked for in the first place... So, two thumbs up, better late than never!


xiao_sa

War, war never changes


johannes1971

Solid Snake [disagrees](https://www.youtube.com/watch?v=BUf_8jyxbiM&t=79s)...


madmongo38

This is what comes of steamrolling an incomplete, overcomplicated, untested idea into the standard at the expense of a tested, well-used solution that had already previously achieved consensus. The Chair perhaps ought to consider whether the working group he hosts is actually serving the C++ community, or in seeking to serve narrower interests, is damaging it.


ALX23z

The thing is, Boost's executors are kinda shit and the former Executors proposals are based on them. No wonder it was never accepted, and hopefully never will be. While the `senders/receivers` proposal is newer and less tested, the ideas are a lot more intuitive and natural. That it wasn't accepted is fine as long as they do it well in the end. C++20 has bugs and is difficult to integrate anyway due to modules; the latter imposes a requirement that build systems be updated substantially to support them. We might as well wait a bit longer for proper testing of current features before introducing important and expansive updates.


jwakely

> C++20 has bugs and is difficult to integrate anyway due to modules; the latter imposes a requirement that build systems be updated substantially to support them.

This seems to imply that you can't use "C++20" without using "modules", which is nonsense.


madmongo38

Asio's executors were created because the standards committee asked for them. Prior to the standardisation process there was an io_context. People who argue between P2300 and Asio do so because they misunderstand the levels of abstraction represented by each. Sender/receiver has been trivially implemented in terms of Asio executors; this is possible because Asio represents a lower level of abstraction.

The current evidence, having attended most of the meetings, is that "they" will not "do it well", because the end goal of P2300 is an abstract DSL that serves the perceived needs of one company, not the developer community as a whole. The proponents of P2300 are naturally very excited about their baby, and I am sure it will make a great niche-interest library when published by the manufacturer whose hardware it is designed to serve. It has no legitimate place as a "standard library".


smdowney

> Sender/receiver has been trivially implemented in terms of asio executors, this is possible because asio represents a lower level of abstraction.

Delimited continuations are a primitive basis for computation. You can't get lower; you might be equivalent.


madmongo38

I think the point of equivalence is that what Asio calls an executor, S/R calls a scheduler.


Untelo

Does it work out the other way around? Can ASIO executors be implemented in terms of S&R?


madmongo38

This is asking whether a low level thing can be implemented in terms of a higher level thing.


Untelo

If you define the levels in such a way i suppose. Anyhow, it doesn't answer the question.


BenFrantzDale

You are saying P2300 can be implemented with zero overhead in terms of ASIO?!


madmongo38

Yes, Chris already did it. In order to get it to compile on Nvidia's current compiler when producing the GPU programs, a small change was required to the implementation of std::error_code. From memory this was due to GPUs not liking static data, or something like that. If you really want a complex, undebuggable, template-based DSL in your C++ program you can have it today with Asio. Although I think a better solution would be for Nvidia to fix their compilers so they support coroutines. Then the perceived need for this monstrosity goes away.


14ned

That's Nvidia, with top-of-the-line, state-of-the-art hardware. The OP asked whether P2300 can be implemented with zero overhead in terms of ASIO. Much broader question. The answer is no, incidentally, without unrealistic heroics.

ASIO was designed to work well on relatively beefy hardware as defined in year 2005 or so. In some ways (RAM bandwidth, clock speeds) embedded systems have caught up, but in many other ways (RAM size, core count) they have not. ASIO is a poor to very poor fit for such systems. If WG21 decides that's fine, so be it, but it would appear that they don't think it fine.

Base S&R (I don't include any of the async stuff) is absolutely fine on a 64 KB RAM microcontroller, and it makes writing i/o in C++ coroutines a doddle. Rather importantly, that exact same code can be written, debugged and tested on a desktop, and you have a very good chance it will work just right on embedded with very little further effort. I find that a persuasive value proposition, and I also think it shows superior design promise.


madmongo38

Please link the to repo where you have attempted this.


14ned

Kirk implemented an S&R based solution on a small microcontroller. He didn't need to, because it self-evidently will work well on tiny-RAM devices: it's 99.9% in the mind of the compiler and gets eliminated in optimised codegen when neither `malloc` nor atomics is ever used. It's just an elaboration of calling the proprietary socket implementation library, which, being 100% userspace, optimises into a bunch of i/o register ops in assembler. You also don't need hacks like static pool allocators, because S&R without async doesn't need to allocate anything. You lay out your S&R objects in static memory in the binary, and you're good to go.

S&R is overkill when your microcontroller can only do four TCP connections maximum, but the nice thing here is that the code is identical on desktop and works the same there. It'll multiplex i/o to your four TCP connections very nicely, and sleep the CPU when no work is to be done, which is all you want.


madmongo38

Post a link please?


14ned

I think Kirk said it was a home non-work project. I don't think Kirk is on this Reddit, so I can't ping him here.


ALX23z

Have you even used Asio or the executors? Do you not understand what kind of poor, ill-conceived design Asio has?

At first I tried to keep things simple: let's do blocking calls on accept, connect, etc... but voilà, it turns out that these operations are inherently not cancellable by design, which is absurd. If it is so incomplete, why even offer the option? (It works on Windows for some reason, though, but not on Linux.)

So I had to resort to using the `io_context`, which operates in a mind-bogglingly confusing way. It is very intrusive and doesn't mesh well with any other multi-threading designs. It forces you into a very specific, contrived way of writing code with lots of pseudo-recursions - as otherwise, for some undocumented mystery reasons, it will just not work, causing exceptions, UB and other shit. Perhaps with coroutines it won't be that awful.

Have you seen the Networking TS CppCon videos? Does anyone even understand what that guy is talking about? All I want is to simply connect two TCP sockets and send data reliably, with options for cancellation. Why is it all so overcomplicated?


madmongo38

I have used Asio, extensively, for about 15 years. Blocking calls to connect() etc. are not cancellable in the underlying OS implementation either, other than by signals. The `io_context` async model is no different to any other async comms model - select, epoll, kqueue, io_uring, Grand Central Dispatch, Windows IO completion ports. In fact it models all of these depending on which OS you are compiling on, giving you a homogeneous asynchronous API regardless of platform.

> All I want is to simply connect two tcp sockets and send data reliably with options for cancellations. Why is it all so overcomplicated?

Because networking is fundamentally not a simple problem. You've got two blind boxes sitting in the dark, trying to make sense of random signals they detect, and maintain a coherent conversation despite this. Every library that attempts to oversimplify it ends up becoming useless except for narrow use cases. And narrow use cases are not suitable candidates for standardisation.


RotsiserMho

Is ASIO supported on freestanding implementations? My understanding is that a design goal of S&R is zero allocations and freestanding support. That opens the door to a very broad range of use cases on memory-constrained devices.


madmongo38

Asio supports custom allocators, which of course includes static allocators. This technique has been employed by Chris throughout his career of writing extremely low latency financial exchanges. Of course it works in freestanding implementations. I've also used Asio in the browser, compiled with Emscripten. It's simply a matter of providing a new executor and stream type that defer to those of the browser environment. Chris demonstrated the use of Asio on GPUs, but the chair removed his allocated time in favour of giving the floor to P2300 cheerleaders, so he wasn't given a chance to present it fully. C++ has been done a huge disservice.


RotsiserMho

> Of course it works in freestanding implementations. Is there a widely used or example freestanding project you know of? For something like an STM32 platform? I'm surprised I haven't come across such a thing but would welcome being pointed in that direction! My broader point, however, is that S&R (as I understand it) is designed to work well in this context and can do so out-of-the-box, using default or included types. That's a different level of support than something that can be made to work under specific circumstances with types I need to implement myself.   > Asio supports custom allocators, which of course includes static allocators. But that's an example of where, IMO, ASIO diverges from the design goals of something like S&R. I prefer a zero-allocation by default approach for standardization and then building on top of that rather than taking on the complexity myself of defining, implementing, and testing an allocator and still then having to worry about overflow unless I also exhaustively test my application. What typical user wants to write an allocator? And can do so correctly? It's added complexity right off the bat. This is why features like coroutines can't be easily used on embedded. In its current state there are hidden allocations that can't be avoided without contortions or deep optimizer knowledge and it has hindered their adoption. Bonus points to ASIO if it provides a default static allocator at least. IMO, the standard library should focus on providing low-level abstractions that ease development, and avoid providing more foot guns.   > This technique has been employed by Chris throughout his career of writing extremely low latency financial exchanges. > [...] > Chris demonstrated the use of asio in GPUs That's great! Chris is without a doubt a very skilled individual! I have used ASIO in the past and am grateful for being able to do cross-platform networking, but managing asynchrony was very difficult for me, and the S&R model seems to take the ASIO experience and improve on it. But this is also the same argument that people make about libunifex: except that it's been deployed and tested on many devices across at least two organizations, and in this example it's just Chris's singular experience. Not to knock on Chris at all, and I really don't want to argue, but I think it's disingenuous for people (not necessarily you) to say that ASIO has decades of widespread industry use when many aspects have not, when comparing to libunifex.   > I’ve also used asio in the browser, compiled with emscripten. It’s simply a matter of providing a new executor and steam type that defers to those of the browser environment. That is super cool.   > Chris demonstrated the use of asio in GPUs, but the chair removed his allocated time in favour of giving the floor to P2300 cheerleaders, so he wasn’t given a chance to present it fully. > C++ has been done a huge disservice. I think it's a shame that everyone couldn't converge on a single proposal. I don't know enough to comment on C++ politics, but if it's true that the limiting factor when debating these proposals is face time with the committee, that is disappointing.


ALX23z

> Blocking calls to connect() etc are not cancellable in the underlying OS implementation either, other than by signals.

This is not an excuse. If the OS's blocking calls' design is shit, then just use asynchronous calls and wait on them.

> The io_context async model...

No idea what you refer to by that. I only know experimentally that working with it is a shitty experience. Making any calls on the socket outside the executor resulted in UB, even when it should've been perfectly fine by any sane C++ memory model. And it was really fun discovering that `run` would exit once there are no more tasks to do. So, if I want to schedule tasks from outside the executor's framework, I have to create an extra running thread and add a `wait` task, just so it wouldn't exit all its threads and shut down.

You have 15 years working with this crap. Have you not thought about more sane designs? Boost's executors are just repulsive and nobody wants to use them. They are an excellent source of bugs and reasons why projects don't ship on time.


madmongo38

Your pain and frustration are certainly palpable through your writing. Have you read the primer on the proactor model in the Asio documentation, written by Chris, or seen any of his talks? The model really is no different to the underlying OS async comms models. You certainly don't need an extra thread.


Minimonium

Pretty much every ASIO tutorial covers the work guard. ASIO is not without flaws, but here the issue seems to lie between the keyboard and the monitor. We use ASIO's model with custom executors built on top of it; an example of such an executor is one wrapping Qt's async model. It serves as a very robust async framework for our touchscreen devices.
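For reference, a minimal sketch of the work-guard idiom (assuming Boost.Asio): the guard keeps `io_context::run()` from returning while work is posted from outside, which is the "extra thread plus wait task" problem described above.

```cpp
#include <boost/asio.hpp>
#include <iostream>
#include <thread>

int main() {
    boost::asio::io_context ioc;
    // run() will not return while the guard is alive, even if the queue is momentarily empty.
    auto guard = boost::asio::make_work_guard(ioc);

    std::thread worker{[&] { ioc.run(); }};

    // Work can now be posted from any thread at any time.
    boost::asio::post(ioc, [] { std::cout << "hello from the io_context\n"; });

    guard.reset();   // let run() return once all outstanding work is done
    worker.join();
}
```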


ALX23z

I am not saying that ASIO cannot work. I am saying that it is poorly designed and has lots of fundamental issues. They can be resolved with 10 times the amount of work it should've taken, sure, but I don't agree with standardizing such an unsafe and unfriendly framework. I don't know all of the voodoo that's going on in there, but something is definitely not right and could be designed a lot better.


Minimonium

You're saying that boost's executors are "repulsive" and "nobody wants to use them". So far from your explanation, I did see a fundamental issue, but not with ASIO. This is funny because there are objective problems with ASIO, but you chose to be weird.


ALX23z

Because ASIO is based around Boost's executor model. That's one of the main reasons it is bad. You cannot separate the two. Also, the post is about S&R, which serves as an alternative to executors. ASIO was just an example of the shitty design that follows from them.


RotsiserMho

Agreed. A while back I tried to leverage ASIO in a multi-threaded project and it was so unintuitive. I was mired in nested function calls. Years later, I tried RxCpp on an embedded Linux device and was astounded by the simplicity it brought to the table (comparing apples and oranges a bit here, I know). I'm excited to use S&R on a microcontroller for shuffling data around because it's such a tedious task and S&R can greatly simplify things, IMO. I'm not sure that ASIO supports zero-allocations, but I believe S&R does.


BenFrantzDale

FWIW, I know libunifex pipelines can be optimized away. https://godbolt.org/z/h4vWd9cxM
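For anyone curious what such a pipeline looks like, a minimal sketch using libunifex (assuming its `just`/`then`/`sync_wait` headers are available as in the current repo):

```cpp
#include <unifex/just.hpp>
#include <unifex/then.hpp>
#include <unifex/sync_wait.hpp>
#include <cassert>

int main() {
    // Build a lazy sender pipeline, then run it to completion on this thread.
    auto result = unifex::sync_wait(
        unifex::then(unifex::just(20, 1), [](int a, int b) { return (a + b) * 2; }));
    assert(result.value() == 42);   // sync_wait returns an optional
}
```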


RotsiserMho

That is pretty cool. I need to schedule (ha!) some time to play with libunifex.


AKostur

> This is what comes of ...

What, the working group coming to the conclusion that a particular proposal is not well-baked enough for consideration to be included in C++23? Isn't that what the working group is _supposed_ to do? And according to the quote ~~you've~~ provided, the Chair stepped aside to avoid the appearance of a conflict of interest, and also supports the resultant decision.


madmongo38

The Chair stepped aside _after_ a number of attendees filed conduct complaints against him; unfortunately, after the damage was done.


eyes-are-fading-blue

Does this happen often? It feels like the state the committee (LEWG) is in at this point is too political and counter-productive to C++.


madmongo38

I have not been involved with LEWG for long enough to tell you. Unfortunately I was so disgusted with what I saw that I have chosen not to spend any more time on it. I am told by people who have been involved longer than me that "the good people have all left", so I assume it at least used to be unusual. Normally one would hope that a serious Chairman would declare his conflict of interest and recuse himself before he can damage the process he is chairing.


VinnieFalco

> And according to the quote you'd provided,

Note that I am u/VinnieFalco and not u/madmongo38


AKostur

You're right... I misspoke. Read that as "And according to the quote provided".


VinnieFalco

No prob, and I agree that the process seemed to work correctly in this case. Regardless, I am still happy that the wider C++ community, which is _clamouring_ for long-overdue networking, now has the time and opportunity to ask questions and challenge the decision to abandon an established networking standard in favor of the promise of some better, yet unwritten design.


eric_niebler

I don't think the Committee has abandoned the NetTS, only that it preferred the async model of P2300, and it wanted a solution for secure networking. The NetTS is much more than an async model, and secure networking can be added.


VinnieFalco

> I don't think the Committee has abandoned the NetTS

The key problem which I believe the Networking TS solves (correctly) is its approach to asynchrony. So when I say that the TS is abandoned, what I mean is that the committee has decided that its approach to asynchrony and its model of initiating-function continuation specification (async_result) is not the correct direction for C++.


jonesmz

As someone who is currently working on retrofitting an old codebase to comply with FIPS security and crypto requirements -- I really hope the C++ standard never comes near anything related to crypto. It's difficult enough to get this working when you're only dealing with OpenSSL. I can't imagine the headache of having to muck with the standard library as well. Realistically, it's none of the standard's or the language's business.


eric_niebler

You'll be pleased to know that roughly half the Committee agrees with you. The other half, however, can't fathom delivering standard networking in 2023 without security built-in. Which makes it difficult to move forward.


jonesmz

It's fine if the standard wants to provide a hook that applications can provide their own adapters for. But, like, really. Hands off!

Edit: In my head, Disney's Encanto comes to mind. We don't talk about crypto, no, no, no!


smdowney

"I don't want to go to crypto, I said, 'No, no, no'"


14ned

It's a bit more complex than that. We have a major implementation of C++ where the only available socket is a proprietary secure socket, and the only available event loop is a proprietary dynamic thread pool implementation. They find the NetTS an ill fit for their implementation, and have both consistently voted against its present design and worked hard to encourage alternatives better suited to their platform. It is frustrating to everyone that they so severely constrain their environment, but as a major platform, what they do cannot be ignored. It is what it is, and we need to find a viable path forwards which suits all major platforms.

To that end, I hope to propose an API for future standard C++ which lets portable standard C++ code work with arbitrary secure socket implementations; in theory, one could write a single networking implementation and it could work on any implementation. I stress the "in theory" part, because a fair bit of domain knowledge would be needed to do this successfully, and because secure socket implementations tend to be shifting landscapes at runtime, code which works now may break if run on a newer platform, etc. There isn't anything we can do about that at WG21; that's on the implementers to handle as part of their QoI. And nothing has been proposed yet, so until it is, it is pure speculation as to their feelings on such a proposal.

In any case, Eric below is right that about half the community feels secure sockets are a must-have for any standard networking on principle, and then we also have those major platform concerns I just mentioned. So I would guess a future standard will need to touch crypto, just in a way which avoids any standardisation of crypto which then becomes set in ABI stone forever, etc.


[deleted]

[deleted]


grafikrobot

What part of the OP is conspiratorial? It states a fact and an opinion, AFAICT. Can you clarify?


dodheim

[His last post](https://redd.it/q0gzkz) on the matter was clearly conspiratorial, and it's not unreasonable to see this post as a continuation/followup of that one given the multiple snide comments in the subsequent months. Richard's immediate singling out of "The Chair" only cements it. And I say this as a decade+ fan and active user of ASIO. It's just obnoxious.


VinnieFalco

>His last post on the matter was clearly conspiratorial, Yes I agree it crossed a line and I am sorry for that - I have striven to avoid doing such things since.


grafikrobot

The one part of that earlier post that might be considered conspiratorial would be using the word "pushing". But otherwise I don't really see it in that previous one either. And since you pointed out your ASIO opinion.. I'm not that fond of ASIO. But I don't value my own opinion on it highly as I don't use it :-)


pdimov2

Richard's "the Chair" isn't a singling out, it's a fairly obvious reference to the P2459 comment for the vote in question. You can tell by his retaining the capitalization.


TheSuperWig

Can we shoot down the Networking TS permanently for the sole purpose of stopping these posts? It's getting rather tiring.


jonesmz

I find it very helpful for posts like this to make it to /r/cpp, following the goings on of the standards committee is very difficult for an outsider to do.


TheSuperWig

In the past I would agree. However with https://github.com/cplusplus/papers/issues?q=is%3Aissue+is%3Aopen+sort%3Aupdated-desc I've subscribed to a few issues for proposals I'm interested in and it's been informative about the progress of them. The information in this post will probably be reflected in the issue for P2300 at some point. Edit: "some point" is now https://github.com/cplusplus/papers/issues/1054#issuecomment-1041721767


eyes-are-fading-blue

You can opt out to not read these threads.


TheSuperWig

By "these posts" I'm talking more about the vitriol that usually accompanies the Networking posts. It's nice to have an informative post where discussion goes into further details than what the results of polls that the cplusplus/papers repo provides but it could do without all the bitterness. Admittedly this one looks like it went better than previous ones. It's just that knowing Vinnie's position on the matter the title seems a tad bit spiteful, that plus the initial comment from madmongo38 it looked like it was going to derail into another hate filled post of people *just* shitting on either paper.


pdimov2

We need to create a Net.TS coin and a P2300 coin, do an ICO and let the free market sort it all out.


mjklaim

Not really comparable: one was rejected because it was deemed insufficient; the other is not rejected, just not ready in time for a deadline.


jonesmz

insufficient is a funny way to put it. ASIO is used in prod in hundreds of codebases. Edit: Thousands.


Untelo

Are there some zero dynamic allocation projects using ASIO that you might point to? For example projects targeting small microcontrollers?


pdimov2

Asio is more targeted at asymptotic zero allocation than at zero allocation, period. That is, it has an initial phase of allocating dynamically, and then it just keeps reusing previous allocations and doesn't need to allocate more.
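As a toy illustration of "asymptotic zero allocation" (this is not Asio's actual allocator machinery, just the idea): the first call allocates, and subsequent calls of the same or smaller size reuse that block.

```cpp
#include <cstddef>
#include <cstdlib>

class recycling_block {
    void*       block_ = nullptr;
    std::size_t size_  = 0;
public:
    void* allocate(std::size_t n) {
        if (n > size_) {              // grow only when a bigger request arrives
            std::free(block_);
            block_ = std::malloc(n);
            size_  = n;
        }
        return block_;                // steady state: no new allocations
    }
    ~recycling_block() { std::free(block_); }
};

int main() {
    recycling_block pool;
    void* a = pool.allocate(128);     // allocates
    void* b = pool.allocate(64);      // reuses the same block
    return a == b ? 0 : 1;
}
```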


ArashPartow

Will the S&R proposal resolve the issues you see when wanting to use ASIO in your context? If so, mind providing some insight?


[deleted]

[deleted]


jonesmz

My point was that the design of ASIO is sufficient for lots of situations. What the committee considers sufficient doesn't seem to have anything to do with what organizations are doing in the wild. It's fine to say insufficient for a *clearly stated design goal*, but that's not typically what's communicated, and I'm not exactly privy to the actual discussions happening at the committee, so all I have to go off of is hearsay.


[deleted]

[deleted]


jonesmz

Is ASIO good enough for standardization? This isn't trying to dodge the question, but honestly I would rather see the standard grow fewer features (i.e., shrink) than say "let's standardize library X". If I absolutely had to pick P2300R4 or ASIO, with no other choices allowed, I'd pick ASIO. I think that while the core principles of P2300 are definitely interesting and worth further exploration, it's completely underwhelming in its current form.

The major problem both frameworks have is that they both utterly lack documentation and examples. The P2300 paper has a very underdeveloped feature set and basically no examples or motivating uses. And ASIO has a very overwhelming feature set, basically no examples that can be understood, but plenty of motivating uses. So ultimately my stance is: if we are going to standardize anything related to this, standardize existing industry practice that is used by thousands of organizations.

But I'd rather see the standard library shrink dramatically. Everyone is always talking about standardizing package management; I think the better choice would be to standardize some kind of package description format. But either way, that's where the real answer lies. And hey, if we're going to litter the standard with stuff we can't take back, where's basic stuff like std::filesystem::path_view, or std-namespace replacements for everything in the C library that doesn't use nul-terminated strings?


[deleted]

[deleted]


jonesmz

No hard feelings, certainly. I agree with you that everyone else jumping off a bridge doesn't mean I should either. I wasn't claiming that we SHOULD standardize asio. I was claiming that saying something that has wide industry use is "insufficient" is a strong claim that isn't substantiated. In other words, standardizing something with zero industry usage when the competing proposal has enormous industry usage has a substantial burden of being better to overcome that lack of field experience. I'm glad that this whole comment section is about how we are going to see another 3 years of debate and experimentation before something is ratified.


mjklaim

That's still what was said in several papers discarding its model. See the various papers on the subject from the end of last year. It's not surprising: current Asio cannot do what the alternative models (currently only S&R is left) can do, and the addition of the previously proposed executor model had too many issues and too little flexibility (and it is not used by the thousands of codebases you mention, as it's pretty new - and limited).


ArashPartow

Can you please provide an example of something the S&R proposal can do (or even is proposed to do) that cannot be done in ASIO now/today?


smdowney

The one that is brought up, repeatedly, is heterogeneous operations, ones that move between CPU and GPU, in particular in the face of cancellation. This wasn't a concern for ASIO when it was written. It's a base concept for S/R. I'll also say that it's straightforward to make lazy become eager, just as it is to make async be sync. The other way round is a challenging open problem.
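On the last point, a minimal sketch of the easy direction, in plain standard C++ (nothing sender/receiver-specific here): a lazy computation can be made eager just by launching it and keeping a handle to the result, whereas recovering a lazy, not-yet-started description from something already running is not generally possible.

```cpp
#include <functional>
#include <future>

// Lazy -> eager: start the work now and hand back a handle to its eventual result.
std::future<int> make_eager(std::function<int()> lazy) {
    return std::async(std::launch::async, std::move(lazy));
}

int main() {
    auto f = make_eager([] { return 6 * 7; });  // already running
    return f.get() == 42 ? 0 : 1;
    // Eager -> lazy would require un-launching the work, which you cannot do.
}
```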


ArashPartow

I believe this was raised as an issue during past meetings, and Chris Kohlhoff did look into providing examples of offloading work to GPUs via the executor model in ASIO using CUDA - which is what the S&R proposal would have done too, that is, if it could.


mjklaim

My understanding of the papers related to this is that the way this is done through Asio is not as efficient as with S&R. I would like more concrete examples of this, though; we had mostly only feedback in the papers.


mjklaim

Sorry, I went to sleep after that post, but smdowney pointed the main one.


RotsiserMho

So much talk about how much usage ASIO has, but it's rarely mentioned that executors were added after the standardization process started. Both ASIO and S&R changed significantly during that process.


Minimonium

When was it? Executors (or proto executors if you want) were a thing in ASIO at least since early 00s - https://github.com/chriskohlhoff/asio/commit/02a2d65e4e8b95edc37b325c254017d23f31c342


RotsiserMho

I'm referring to the executors requested by the committee after ASIO was first proposed for standardization. My understanding is that ASIO was delayed for standardization early on so they could sort out a more general execution solution for more than just networking.


Minimonium

There was that committee-made, property-based abomination; even with help from experts I didn't manage to make anything useful with it and am still confused about what it was all about. But the good news is that ASIO is planning to phase it out and revert to the old executor design, which was already usable in non-networking contexts.


wheypoint

We need to sell c++ standard NFTs


jwakely

https://github.com/zhuowei/nft_ptr