diff --git a/meetings/2024-06/june-11.md b/meetings/2024-06/june-11.md
new file mode 100644
index 00000000..8be05b05
--- /dev/null
+++ b/meetings/2024-06/june-11.md
@@ -0,0 +1,1552 @@
+# 11th June 2024 | 102nd TC39 Meeting
+
+**Attendees:**
+
+| Name | Abbreviation | Organization |
+|-------------------|--------------|--------------------|
+| Keith Miller | KM | Apple Inc |
+| Ashley Claymore | ACE | Bloomberg |
+| Jesse Alama | JMN | Igalia |
+| Waldemar Horwat | WH | Invited Expert |
+| Jason Williams | JWS | Bloomberg |
+| Daniel Ehrenberg | DE | Bloomberg |
+| Duncan MacGregor | DMM | ServiceNow |
+| Bradford C Smith | BSH | Google |
+| Agata Belkius | BEL | Bloomberg |
+| Sergey Rubanov | SRV | Invited Expert |
+| Matthew Gaudet | MAG | Mozilla |
+| Richard Gibson | RGN | Agoric |
+| Chris de Almeida | CDA | IBM |
+| Daniel Minor | DLM | Mozilla |
+| Chip Morningstar | CM | Consensys |
+| Philip Chimento | PFC | Igalia |
+| Michael Saboff | MLS | Apple Inc |
+| Mikhail Barash | MBH | Uni. Bergen |
+| Justin Grant | JGT | Invited Expert |
+| Christian Ulbrich | CHU | Zalari GmbH |
+| Tom Kopp | TKP | Zalari GmbH |
+| David Enke | DEN | Zalari GmbH |
+| Shane F Carr | SFC | Google |
+| Chengzhong Wu | CZW | Bloomberg |
+| Samina Husain | SHN | Ecma International |
+| Jordan Harband | JHD | HeroDevs |
+| Jonathan Kuperman | JKP | Bloomberg |
+| Istvan Sebestyen | IS | Ecma International |
+| Aki Rose Braun | AKI | Ecma International |
+| Romulo Cintra | RCA | Igalia |
+| Luca Casonato | LUC | Deno |
+
+## Welcome
+
+Presenter: Rob Palmer (RPR)
+
+RPR: Do we have any volunteers for note editors to assist the transcriptionist? Any volunteers at the moment? Okay. Dan has his hand up in the room. As does Jesse and Shane. We are off to an excellent start. I am glad to see eagerness. At the end of each topic, I would ask speakers to wind up in time so that you, as the presenter, have enough time to state what has been said in summary form, so we can capture that.
+
+DE: I want to emphasize this point about the summary – the key points of the presentation and discussion – and the conclusion being very important for all items that are discussed. I want to discuss with the chairs: rather than letting a topic run to the end of its timebox, we could, if we need to, cut off discussion so we have enough time allocated for documenting the summary and conclusion. We consistently have an issue where most topics don’t have a summary and conclusion by the time the meeting is over. Writing it during the meeting makes sure it’s accurate and that we agree on it together as a group.
+
+RPR: In general, that’s going to mean we will end topics 3 or 4 minutes before the end of the timebox, because we have got a very packed agenda with no visible wiggle room. So everyone please be prepared to finish up 3 or 4 minutes early, even if you haven’t achieved your objectives. Okay. The next meeting is at the end of July. It’s a remote meeting, I think on the West Coast time zone.
+
+RPR: All right. Let’s start up with some housekeeping. First of all, can we mark the previous meetings’ minutes now as approved? Silence means yes. Yes. We can.
+
+RPR: Any objections to adopting the current agenda? I am hearing no objections. So that is done.
+
+## Secretary's Report
+
+Presenter: Samina Husain (SHN)
+
+- [slides](https://github.com/tc39/agendas/blob/main/2024/tc39-2024-026.pdf)
+
+SHN: Thank you to the host for setting up this meeting. I can see from the camera you have got a great turnout. It looks good. It’s been a while since I’ve been to Helsinki and it’s a beautiful city. I hope you have good weather and enjoy the city in addition to this meeting. I also want to thank Eemeli for organizing the social event. You will be doing that this evening. Enjoy that, and I am sorry that I am not there. I would have very much enjoyed being in the meeting and at the social event, but I am recuperating; my hand doesn’t allow me to travel right now.
+
+SHN: I want to thank DE for making the comments he did regarding the summaries and conclusions. Yes, the summary and conclusions are relevant. It certainly should be something short and relevant: something that enables anyone who reads the technical notes at a later time to quickly understand what the discussion was, the main points, and the resulting actions. I know we have had a lot of conversations about summary and conclusions. We may discuss at a later time, or you all may give me a better idea of how you prefer to do it: whether it is just one paragraph or 5 bullets that cover the summary and the conclusions, or whether you want them separated. I leave that to your discretion. But the summary should capture the objective and the main topic that was discussed, and the conclusion – even more relevant – the agreements you have made, the resolutions, and the next steps to take: whether a proposal goes to the next stage or, if not, what the problem may be. It has been happening, but let’s make sure it happens consistently. I would like to share my screen with my slides. Just give me a second.
+
+SHN: So I hope you can see in front of you the secretary’s report. I also want to recognize AKI, she has joined the ECMA secretariat to support TC39. So please welcome Aki and support her as she does all the work that she has been doing to make sure we are efficient. Thank you Aki and you’re on the call and on the West Coast. It’s some hour in the night or in my case, early morning. Thank you for being awake.
+
+SHN: Just a short overview of what we will discuss today. I will talk about the usual approval process that we do every year, with the success of the two standards. A bit about projects, members, and the work that Aki has done together with AWB on a solution and some options for a PDF version. The next slides have general information for you: some information on the invited experts category and voting/non-voting status, and our code of conduct, of course always important – let us all work together in a friendly and open manner; I think RPR already mentioned that. And some documents that you can access if you wish and, of course, the next meeting.
+
+SHN: All right. Let’s just start. In light of time and a busy agenda.
+
+SHN: Please at any time, interrupt me. I can’t see the chat. Here are the timelines: the 60-day opt-out and 60-day review. For ECMA262, that was closed early, so that’s done. We have not received any issues on IPR, and I imagine neither have you nor the chairs. That seems to be moving smoothly. We look forward to approval of that at the June meeting. That’s the 15th edition.
+
+SHN: And for ECMA402, that came just a little bit later, so it is still open. The opt-out period and the review publication period end on the 24th of June. I am confident that that will go smoothly; we haven’t had any issues or contentions regarding it. Thank you for all the efforts. It’s again excellent that we have got another edition for the coming discussions at the June GA meeting.
+
+SHN: All right. Some new projects. I want to highlight that TC54 was formed. The new standard is going to be the software bill of materials specification; it may have a number associated with it. The TC54 team has been working to have this complete document reviewed and agreed with the technical committee. It does come from a working group, and it’s gone very well. If you want to hear the recordings, they are on YouTube. If you want to see the agendas or the minutes of the last meetings, they are published on GitHub. Some of you have joined the calls; we have a strong group involved at TC54. We have a couple more meetings to ensure the document is finalized, and hopefully they will be ready for approval at the GA. We’re moving forward with that.
+
+SHN: A new proposal for TC55. That’s a collaboration with W3C and the WinterCG group. This is an open conversation with the Ecma ExeCom and will be discussed during the GA. We have a reviewed and revised scope and mode of operation. It will be similar to what we did with CycloneDX. There will be some discussions during the GA, and we will aim to move forward with TC55 and starting a new technical committee. Thank you for your efforts; some of you on this call are also involved in that.
+
+SHN: New members, JetBrains is currently an invited expert. There’s one or two people here from JetBrains. They are welcome as invited experts. They will get the paperwork to me so we hope to have them as a new member in June. The other members, welcome. They are already members and will be officially approved and we will have a short press release on the website to welcome the new members.
+
+SHN: The PDF. We have discussed quite a bit over the last while about having a PDF version of 262. We knew that was a very difficult document to put together; it took a lot of time, and Allen supported us for two years. This year, together with Allen, Aki – and she will add a few comments after I talk about this slide – has gone through the document and prepared a PDF version. I think they are finalizing it; it’s almost ready to go. Listed here are the tools that were evaluated as potential options. We have done a review and spoken to the editors about it. In the square box is the recommended tool. I am just beginning discussions with them to negotiate licensing. I also wanted to bring to your attention that a number of tools have been tried, and all the feedback from the editors – specifically MF’s comments from the last plenary – was taken into account, and so far it’s looking quite good. I don’t have an answer on the licensing agreement yet, but I have just made email contact; the company is in Australia – or at least the individual contacting me is – and we will have some phone conversations very soon. Aki, would you like to give an update on your efforts and your recommendations on print? Aki?
+
+AKI: I know most of you don’t care about the PDF and wish you didn’t have to hear about it. There are no good options for creating a PDF document out of an HTML website, even though CSS paged media has existed for a long time. Prince is compliant with it; that’s why we chose it. So we’re hoping to use that connection to both get clarity on our license and maybe even get them to update their JavaScript implementation and join us. We will see. But yeah, the way I have it set up – or will have, by the time of the GA – is that in the future, the editors should be able to just run one script and have a PDF, and not have to worry about tables being cut off, or notes cut off, or missing information. It will just work.
+
+SHN: Thank you. And I hope that we will get to that solution. I know you’re working hard on it and currently with both 262 and 402, you will be doing the run through that. So we will know where that stands.
+
+SHN: All right. That is the end of my presentation. Since our last plenary, these are some of the efforts that have been going on. In the next, there’s some general information. I bring your attention to the documents. They are numbered. If you have any comments. Don’t hesitate to contact me with any input. If they’re not useful, also give me input and we will find better solutions that are relevant for the committee to look at. You have our dates there and just keep in mind the dates as you build your meeting dates for TC39 we don’t have any conflict. We typically don’t and spoke about the summary and conclusions. I look forward to seeing some of that continue to go with the next technical note that is being finalized. And that is the end. I will stop there. Are there any questions?
+
+DE: So thank you so much for looking into the PDF issue. We previously heard from Michael that the tools evaluated previously needed manual work, and TC39 recommended we find a provider for this. How have we validated that Prince wouldn’t require that work? Or otherwise, should we make some sort of backup plan to make sure that this one to two weeks of work is staffed and we don’t rely on Allen again?
+
+AKI: I am doing that work.
+
+DE: You’re doing that work? Perfect.
+
+AKI: I am doing that work because I am using something that is spec compliant. In the future, you should be able to just run the script and get the PDF. What is different this time is that, hopefully, we don’t have to do the same thing over and over, taking a week or more each time.
+
+SHN: That is the intent; it should not be taking that much effort from others. Aki is ensuring there is a script. For the next one coming up in two weeks, certainly that will be done. We have some time to ensure that this new solution – assuming it works well with the licensing, with Prince – gives us a script that is straightforward to use without weeks of extra effort. Certainly if we have a situation that cannot be handled, we will know quickly and make sure it’s not imposing weeks of effort, as in the past.
+
+DE: Well, thank you Ecma so much for hiring Aki to do this important work, and Aki for doing the work.
+
+### Summary / Conclusion
+
+A reminder on the “Summary and Conclusion” for the technical notes was provided. We should have a format which ensures that anyone reading the summary and conclusion can quickly understand the discussion, main points, and resulting actions. Summary captures the objective or main topic of the discussion. Conclusion includes the agreements, resolutions and next steps.
+
+### Summary
+
+Overview from the secretariat was provided. Ecma approval pending at the GA, 26-27 June 2024, for:
+
+- ECMA-262 15th edition – ECMAScript® 2024 Language Specification. Status: Closed. Opt-out review period 20 February 2024 to 20 April 2024.
+- ECMA-402 11th edition – ECMAScript® 2024 Internationalization API Specification. Status: Open. Opt-out review period 24 April 2024 to 24 June 2024.
+
+TC54 was recognized as a new activity, with a new standard anticipated: the CycloneDX Bill of Materials specification. A proposal for a new TC55, in collaboration with W3C WinterCG, is under discussion.
+
+New members were recognized: Replay.io (SPC), HeroDevs (SME) and Sentry (Functional Software) (AM).
+
+Recommended solution for ES2023 PDF version is Prince. The recommendation has taken into account comments and investigations from the committee.
+
+### Conclusion
+
+Further information on Prince will be provided at the next TC39 plenary.
+
+Dates set for future Ecma meetings:
+
+- GA 129: 25-26 June 2025
+- GA 130: 10-11 December 2025
+- ExeCom: 9-10 April 2025
+- ExeCom: 8-9 October 2025
+
+## ECMA262 Status Updates
+
+Presenter: Kevin Gibbons (KG)
+
+- [spec](https://tc39.es/ecma262/multipage/)
+- [slides](https://docs.google.com/presentation/d/1tnLYexHxk1ygOkn_qevZHS1-MYggF258LNKvCdI0y6M/edit)
+
+KG: Good morning, all. This will be an extremely brief update, in part because it is currently midnight-thirty.
+
+KG: So we have only two normative changes to report since the previous meeting. The first, from Ross (RKG?), fixes the final piece of a longstanding web incompatibility in the spec, where what was specified didn't actually match what engines were implementing, and engines did not want to implement the behavior in the spec because it would slow down a bunch of stuff for no particular benefit. So, concretely, in `a[b] = c`, the coercion of `b` to a property key now happens after the evaluation of the right-hand side instead of before. Again, this is a web reality fix. And the second is landing the Stage 4 proposal for set methods: union, difference, and so on.
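
As a hedged illustration of that evaluation-order change (the variable names are arbitrary), the difference is observable with a key object whose coercion has a side effect:

```javascript
// Observe the order in which `a[b] = c` evaluates its parts.
// Under the updated spec text (matching long-standing engine behavior),
// the right-hand side runs before `b` is coerced to a property key.
const order = [];
const a = {};
const b = { toString() { order.push('ToPropertyKey'); return 'x'; } };
function c() { order.push('rhs'); return 42; }

a[b] = c();
console.log(order); // ['rhs', 'ToPropertyKey'] in current engines
```

Under the previous spec text, the coercion of `b` would have been specified to happen first, which is what no engine actually did.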
+
+KG: There aren’t really any notable editorial changes. Lots of minor improvements, of course. I want to call out that some of the work Aki has been doing is reflected back into the specification, including, for example, taking a couple of extremely wide tables and converting them to tall tables, which is of benefit to readers on the web. If you are watching, expect to see a couple of those things landing. That is part of the work previously mentioned to ensure that the document is in a print-ready state in future years.
+
+KG: For upcoming work, similar list. But I want to call out that we have finally started documenting editorial conventions, or at least MF has. This is a longstanding goal of the current editor group to ensure it’s possible for authors to produce spec text which is consistent with the style of the overall document without having to infer the conventions by reading an 800-page document. If you have interest or feedback in that topic, reach out to the editors about it. That’s all I got. Like I said, a short update today.
+
+### Summary / Conclusion
+
+- Two normative changes: "Delay ToPropertyKey call in `a[b] = c`" and landing the set methods proposal.
+- Editors have started documenting editorial conventions: https://github.com/tc39/ecma262/wiki/Editorial-Conventions
+
+## ECMA402 Status Updates
+
+Presenter: Ujjwal Sharma (USA)
+
+- [slides](https://notes.igalia.com/p/5Tlry4MkK#/)
+
+USA: Hello, everyone. I hope I am audible. As you might have noticed, I am not Ben, so thanks – all credit goes to Ben for all of this work as well as the slides. These have been a few active months in ECMA402. We have done a lot of large and small editorial updates. I want to thank everyone for helping out, especially the 262 editors for vetting our work in various ways. Let’s get into what we did.
+
+USA: First off, we used named records for DateTimeFormat records [PR](https://github.com/tc39/ecma402/pull/826). The background: in various parts of the spec, these were arbitrary unnamed records, which can be quite hard to read. So we replaced those with named records, which have a specific shape that makes sense, since we reuse them anyway. It’s also helpful for Temporal integration, because when we later have to put the Temporal ECMA402 material in, that will just work.
+
+USA: Also, André Bargull, one of our colleagues from Mozilla, has, over the years, done great editorial work. Unfortunately, that was a lot to read through and review in full. But finally, we have managed to get through all of it and merge it [PR](https://github.com/tc39/ecma402/pull/827). So there are a lot of small editorial fixes, as well as general editorial improvements. If you go through the ECMA402 spec, you will see that it’s much improved since not so long ago. And we have him to thank for that.
+
+USA: Then we have replaced AvailableCanonicalCalendars [PR](https://github.com/tc39/ecma402/pull/889). Thank you to the Temporal champions for that. It generalizes how we deal with calendars: now you can get the available calendars from the abstract operation that ECMA402 uses, and we have a new one for canonicalizing them, so you can do that without losing non-canonicalized calendars.
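
As an aside, the set of canonical calendar identifiers an engine knows is observable from script via `Intl.supportedValuesOf` (this is just an outside view of canonicalization, not the spec's internal abstract operation):

```javascript
// List the calendar identifiers this engine supports. The spec requires
// canonical forms here, so 'gregory' appears rather than aliases such as
// 'gregorian'. Assumes an engine built with full ICU data.
const calendars = Intl.supportedValuesOf('calendar');
console.log(calendars.includes('gregory')); // canonical form is present
```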
+
+USA: We clarified the structure of the record used in `DateTimeFormat`, basically using named records to fix unintended spec bugs and improve the readability overall. This was done by Ben. Then there is the language itself. In various parts of the spec, especially when returning, say, a list of certain values like calendars or time zones, the phrasing was quite all over the place. Now we have normalized that by using the terminology that is already preferred in the ECMA262 spec. We were behind on that, so yay for that. Then we replaced this.
+
+USA: Apparently, “is equal to” was something that we had no agreement on: in various parts of the spec we used all sorts of phrasings, and now everywhere we try to use exactly the same terminology, “is equal to”. This improves the readability and takes ambiguity out of the spec. Then we capitalized some uncapitalized things. This was clearly unintentional: capital-L “List” and capital-R “Record” have special meanings in the spec, if you weren’t familiar with that, while the words without the capitalization can mean anything really – just lists and records without the meaning they carry in the spec. We fixed that.
+
+USA: Then we had some issues with the identifier text in ECMA-262 – well, in the integration. Justin helped out with that. This is hopefully helpful both in the context of Temporal and the future Temporal integration, as well as some of the ongoing proposals we are working on in ECMA402 at the moment. We also had some issues with string lastIndexOf which made us not aligned with ECMA262. I believe it was KG who helped us with that – thank you for not only informing us but actually helping us carry out this change. Thanks to the editors, and that was it. Thank you so much.
+
+### Summary / Conclusion
+
+There were editorial updates over the last two months, and we are constantly improving the editorial health of the spec.
+
+## ECMA-404
+
+Presenter: Chip Morningstar (CM)
+
+- (no slides)
+
+CM: You all already know about ECMA-404, right?
+
+### Summary / Conclusion
+
+Nothing new to report. The standard remains stable and unchanging.
+
+## Test262 Status Updates
+
+Presenter: Philip Chimento (PFC)
+
+- (no slides)
+
+PFC: Test262 is, as you know, the compliance test suite for ECMAScript. We have some exciting developments. We are finally landing, in pieces, the giant pull request with resizable ArrayBuffer tests. It had been waiting for review for a long time, for many of the 250 or so files in it, and we’ve started dividing it up into manageable chunks and landing them one by one.
+
+PFC: After that is done, we will use the same approach to land some of the other large PRs, such as explicit resource management and decorators. So you can expect test coverage for some of these new proposals. I would like to remind everybody that we appreciate reviews from champions of proposals as well, because we have a small maintainers group, and although we do enjoy diving into the edge cases of every new proposal that people write tests for, it’s also a lot of work. And we can do it with more confidence if the proposal champions help out.
+
+PFC: And then finally, some exciting news: we’re working on landing a large supplemental test suite from Firefox thanks to work from DLM and Ms2ger. This is something that previously existed in the Firefox codebase and we would like to have it available in Test262 so other implementations can use it to find incompatibilities between engines and just generally get more coverage. That’s it for me.
+
+KM: Are the new Firefox tests going to be in a separate directory or interspersed with the old tests?
+
+PFC: We are figuring out where exactly to put them. But yeah, probably they will be in a separate directory.
+
+### Conclusion
+
+- Finally landing the giant PR with resizable ArrayBuffer tests, in chunks.
+- Will use this approach to land other large PRs such as explicit resource management and decorators.
+- As always, reviews from proposal champions are helpful and welcomed.
+- We are working on landing a large supplemental test suite from Firefox in test262 so it's available to other implementations.
+
+## TC39 TG3 status update
+
+Presenter: Chris de Almeida (CDA)
+
+- (no slides)
+
+CDA: TG3 continues to meet regularly – as I think we mentioned at the last plenary, every Wednesday now. Please join us if you are able and interested. Our activities of late have been pretty much exclusively discussing security impacts of proposals that are in progress.
+
+### Conclusion
+
+TG3 meetings are focusing on the security impacts of in-progress proposals.
+
+## TC39 TG5 status update
+
+Presenter: Mikhail Barash (MBH)
+
+- [slides](https://docs.google.com/presentation/d/1gG1gE0Ggwv9krUpWOhLyvohXGJzCzWiFTOOQ-R83zGs/)
+
+MBH: A short update on TG5. We have changed the cadence to the last Wednesday of every month. Yesterday we had a successful TG5 workshop which was co-located with this plenary meeting, and the plan is to have the next TG5 workshop co-located with the meeting in Japan. The TG5 meeting notes are also now available in the TG5 repository. Update on the activities: I gave a presentation about TG5 at a meeting of the Standards Group of the OpenJS Foundation. At the most recent TG5 meeting we discussed a user study for MessageFormat 2. We are planning to do this together with Michael Coblenz from the University of California, San Diego.
+
+MBH: Also, TG5 solicits suggestions for user studies and topic to discuss, so you are welcome to open an issue at TG5 repository, to share your suggestions. Thank you.
+
+RPR: Thank you MBH. Would you like to write or speak?
+
+### Conclusion
+
+Meeting cadence of TG5: last Wednesday of every month at 4PM-5PM Central European time. Planned a user study on MessageFormat 2. TG5 solicits suggestions for user studies and topics to discuss.
+
+## Updates from the CoC Committee
+
+Presenter: Chris de Almeida (CDA)
+
+- (no slides)
+
+CDA: The Code of Conduct committee continues to do code of conduct things. It’s been quiet. Not a lot of trolling or the like on TC39 repos in GitHub. We have had one issue we have resolved since last plenary. I would like to remind folks, we are always looking for folks to help us on the code of conduct committee. If you are interested, reach out to someone on the code of conduct committee. Thank you.
+
+RPR: So that is CDA’s recruitment pitch. If you wish to join the code of conduct committee, please do so.
+
+## Needs Consensus PR: ECMA-402: Specify time zone IDs to reduce divergence between engines
+
+Presenter: Justin Grant (JGT)
+
+- PR [#877](https://github.com/tc39/ecma402/pull/877):
+- [slides](https://docs.google.com/presentation/d/1U_kNIpJb89LTSFh7BBiFIJSpW_epDSnzn4XKtER4IyQ/)
+
+JGT: Today’s topic is the ongoing saga of time zone identifiers, which we have been working on for, I guess, around two years now, maybe a little longer. This is the next iteration. I last spoke with the committee maybe 2 or 3 meetings ago, when we presented the Time Zone Canonicalization proposal that went to Stage 3 and got merged into Temporal.
+
+JGT: This is the next piece of it. A reminder for everybody, because these are esoteric concepts: we have the IANA Time Zone Database, which is the source of truth about time zone information in computing, both in terms of the data about which UTC offsets are associated with which time zones, and the identifiers that are used for those time zones. CLDR takes each release of IANA, incorporating any changes to data and identifiers that have happened since the last release. There is some variation between CLDR and IANA; we will talk about that variation today and what we are doing about it. There are about 600 identifiers: the normal ones we know like Europe/Paris or Pacific/Auckland, or UTC, which is important for computing. There are two kinds of identifiers. ECMAScript "primary time zone identifiers" correspond to what IANA calls a Zone; an example is Asia/Kolkata. The other kind is a deprecated name in IANA that resolves to a Zone – in IANA these are called Links – like Asia/Calcutta. The IANA database is case-insensitive but case-normalized. And part of what we will talk about today is the resolution process, where you start from an identifier and follow any Links to a Zone.
+
+JGT: Good examples of the kinds of Links that happen: you can have a weird legacy ID like PST8PDT, which is a Link to America/Los_Angeles. Cities get renamed, a rare case: Europe/Kiev was renamed to Europe/Kyiv. Finally, there are Links that are merges: when two time zones have been the same since January 1, 1970, IANA tries to reduce the size of the time zone database by saying, for example, that America/Montreal is now going to be considered equivalent to America/Toronto.
+
+JGT: With that context out of the way: today there is an outstanding PR, which is linked at the beginning of the first slide. What we’re trying to solve is that ECMAScript engines don’t agree about which time zone IDs should be primary or non-primary. CLDR (which is the source of IDs used by V8 and JSC) does not follow the ECMAScript spec. SpiderMonkey does follow the spec, which requires SM to have some complex build steps to go out and get IANA files and use IANA’s rules, and that creates user-facing variation: with a time picker, you get different results depending on which browser you’re using, and with the system time zone, you get different results.
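
That divergence is observable from script. A hedged illustration (which identifier comes back depends on the engine and the data it ships):

```javascript
// Hand a deprecated Link identifier to Intl and see what the engine
// reports back as the resolved time zone.
const resolved = new Intl.DateTimeFormat('en', { timeZone: 'Asia/Calcutta' })
  .resolvedOptions().timeZone;
// Depending on the engine's data source (CLDR vs. IANA rules), this may
// be the Link 'Asia/Calcutta' itself or the Zone 'Asia/Kolkata'.
console.log(resolved);
```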
+
+JGT: Beyond that, the spec has wiggle room which makes this alignment harder. We are trying in this PR to tighten the spec to stop the engines from diverging from each other, as well as to acknowledge web reality, which is that many engines use CLDR.
+
+JGT: The CLDR implementation, we think, is pretty good. And so we would like to change the spec to align to the current implementation of CLDR. By the way the reason we think it’s good is not an accident because we have been working with CLDR for the last two years to improve various things so it’s in a state to be used by ECMAScript. This PR also defines how to handle edge cases that have happened in the past and might happen again in the future. These don't actually affect any engines right now, but we want to prepare for that. Finally, this was approved by TG2.
+
+JGT: So this really revolves around one line in the spec that says "according to the rules for resolving Link names in the IANA Time Zone Database". We will tighten that up. The main difference between CLDR and how IANA works: CLDR – which we think does the right thing – requires that each non-primary time zone identifier share the same country code as the primary identifier that it resolves to. In IANA, you can have merges between countries. So the current iteration, if you use the default build options of the IANA database, will take Atlantic/Reykjavik and turn it into Africa/Abidjan – they have had the same time zone rules since 1970. Same with Europe/Copenhagen and Europe/Berlin, and many others.
+
+JGT: We think that these inter-country merges make software more brittle, so we like the change that CLDR makes to ensure that changes made in one country don’t affect another country’s TimeZone data and we think that is important on the ECMAScript side. We believe that IANA’s decision is incorrect and we'd like to follow CLDR.
+
+JGT: So there are other merges inside countries. For example, Asia/Chungking was renamed to Asia/Chongqing (a rename Link), and then later merged with Asia/Shanghai (a merge Link). CLDR goes along with that merge because it’s still in China, and we think that this is OK.
+
+JGT: Here is the proposed algorithm for ECMAScript in this PR: First, any ID that is a Zone in IANA should be a primary identifier in ECMAScript, other than the historical exceptions for UTC, which will continue.
+
+JGT: Next, the algorithm consults `zone.tab`, which has one line for every Zone in IANA, plus one line for every ISO 3166-1 alpha-2 country code, even if IANA has only a Link (not a Zone) for that country.
+
+JGT: And so any ID that is there – which matches this idea of one ID per country – should be primary in ECMAScript. If an ID is a Link, we look in `zone.tab`. If there’s more than one Zone for that country in `zone.tab`, then we look in IANA’s backzone file, which gives us a historical mapping for that Link inside the same country code. This spec algorithm matches how CLDR does things – or at least it will match after the next CLDR release, which includes a CLDR PR that was just approved.
+
+JGT: And what that means is that ECMAScript engines can call ICU or use CLDR data directly and they will comply with the ECMAScript spec. This matches the web reality of how things work, but defines it in a more precise way. I am not going to go through this long, complicated spec text, but it matches what CLDR is doing.
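
A rough sketch of those resolution rules in JavaScript; the `zones` and `links` tables below are tiny illustrative stand-ins for the IANA data files, not real APIs:

```javascript
// Illustrative subset of IANA data: Zones are authoritative identifiers,
// Links are deprecated names that resolve to a Zone.
const zones = new Set(['Asia/Shanghai', 'America/Toronto', 'UTC']);
const links = new Map([
  ['Asia/Chungking', 'Asia/Shanghai'],     // rename, later merged (same country)
  ['America/Montreal', 'America/Toronto'], // post-1970 merge within Canada
]);

// Every IANA Zone is a primary identifier in ECMAScript.
function isPrimary(id) {
  return zones.has(id);
}

// A Link remains usable but resolves to its target Zone.
function resolveTimeZone(id) {
  return links.get(id) ?? id;
}

console.log(isPrimary('America/Montreal'));     // false: it is a Link
console.log(resolveTimeZone('Asia/Chungking')); // 'Asia/Shanghai'
```

The real algorithm additionally consults `zone.tab` and the backzone file to keep resolution within a single country; that bookkeeping is omitted here.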
+
+JGT: The impact for existing engines: V8 and JSC use APIs that are provided by ICU. There is a separate agreement that when Temporal reaches Stage 4, those engines will move to a different API that solves the problem that exists today – which we have talked about previously in committee – where outdated IDs like Asia/Calcutta are used. We want to wait until Temporal is out there because of some web compatibility concerns. In the SpiderMonkey case, they would be able to switch today to using this new API from ICU, and it should reduce the complexity of their build process, because they would no longer need to fetch the custom IANA files.
+
+JGT: There are two other changes in the PR. One deals with the problem that happens every few years when a city is renamed. The proposal is to have a two-year waiting period: we would first introduce the new ID as a non-primary ID. Because browsers are updated frequently, what's happened in the past is that browser users get new IDs before everything else in their environment, so they might send the new IDs to a network device or some other software that hasn't seen them before. The idea here is that we wait two years after a rename happens before swapping the primary and non-primary IDs. And really, this is a recommendation, not a requirement in the spec. We will ask CLDR to implement the waiting period, as opposed to asking engines to special-case renames. And remember that this is rare; one rename every few years is typical.
+
+JGT: And the final change in the PR is that there’s currently a recommendation in the spec that is saying, while an agent is currently running, don’t change the TimeZone database underneath in an observable way. Nobody does this today. But we just want to make sure they don’t do it because it’s disruptive for developers.
+
+JGT: This is the spec text, along with the recommendation. And this is the spec text changing that recommendation into a requirement. And that's it. Any questions or concerns?
+
+DLM: Thank you for this change, SpiderMonkey supports it.
+
+MF: Okay. So I am not qualified to really question CLDR, but I am going to anyway. Regarding the merging of time zones, it seems that not merging across country borders was an arbitrary decision, since in some places there are finer-grained localities that have authority over time zones and are just as likely to diverge from each other. So what is the reasoning behind choosing country borders?
+
+JGT: It was CLDR's decision, made a long time ago. I can't necessarily speak for them, but it tends to cause less disruption when things happen within a country. A lot of software tends to be country-specific. And people inside a country are more accepting of changes, you know, because it's their own countrymen making them, rather than a neighboring country they may have been warring with for hundreds of years. That's my best guess of why CLDR originally did this.
+
+JGT: Because much of the web has been working this way for so long, and because the IANA Time Zone Database has done this, if we were to decide on the ECMAScript side to deviate from CLDR and from IANA, I think it would be more disruptive than helpful.
+
+MF: I agree with that. I am supportive. But it does seem like we are only getting slightly better. And it’s not really a great solution. But it is better.
+
+JGT: Yeah. I agree.
+
+RGN: Clarification: it wasn't just country boundaries we were looking at, but specifically the ISO 3166-1 codes, and there are many significant regions that have their own code even though they are part of the same country. So it's less arbitrary than it might seem.
+
+JGT: That's actually a really good point. A good example is Norway, which owns some islands up in the North Sea whose name I cannot pronounce very well. Because that territory has its own country code, it gets a separate time zone, and I would guess that most of the intra-country ownership arguments are going to be covered by that exception. So it does tend to reduce the scope of the kinds of concerns you were thinking of.
+
+RPR: Okay. I’ve got a couple minutes for SFC.
+
+SFC: Yeah. Thanks for bringing this forward, JGT, and continuing to champion this change. We concluded that the approach in this pull request, as JGT describes in the slides (alignment with CLDR and Unicode), is really useful and important for us. This is how ICU and ICU4X implement this. And I think JGT made a good case that this makes sense for users. Establishing that alignment is very beneficial.
+
+SFC: And I know that RGN and the other editors have been working closely with JGT on tweaking the spec text here. It looks like that has happened. I am in support of the change, and thank you all for working on this.
+
+RPR: You also have a + 1 from Philip Chimento.
+
+PFC: This is a topic we have been discussing on and off in the Temporal champions meetings for a couple of years, and I would like to thank Justin for getting everybody in the group to understand how important this is. I would like to give explicit support for this change.
+
+SFC: One other thing I forgot to mention: I think Justin touched on this a little bit, but there's CLDR-17111, which is about the legacy identifiers, and that has also been resolved. At least the committee has discussed and agreed with that change as well. So I expect that it will also be reflected in CLDR, which means more alignment.
+
+JGT: Yes. There's IANA, CLDR, and ECMAScript, and this PR we're discussing today is one of the very last pieces required to bring all three into alignment.
+
+RPR: So we have heard multiple votes – or statements of support. Are there any objections? There are no objections. So I think we can consider this approved.
+
+JGT: Thank you very much, everybody.
+
+### Speaker's Summary of Key Points
+
+- Time zone identifiers in ECMAScript come from the IANA Time Zone Database (https://www.iana.org/time-zones) or "TZDB". Unicode CLDR (https://cldr.unicode.org/) provides TZDB identifiers, exposed either as data or via the Unicode ICU API.
+- There are ~600 TZDB identifiers: most are "Zones" (called primary identifiers in ES) while some are "Links" (non-primary IDs) that resolve to a Zone. Today's discussion is about which IDs should be non-primary and what primary IDs they should resolve to.
+- CLDR deviates from TZDB's ID resolution in one important way: Links are ignored if they would resolve to a Zone in a different ISO-3166 Alpha-2 country code. We believe that CLDR's behaviour is better for ECMAScript, because it prevents future political changes in one country from breaking another country's apps. Also, it's what V8 and JSC have been using for years, so changing it would be less web-compatible than reifying it.
+- This PR also adds a recommendation to wait 2 years after a city is renamed (like 2022's Kiev=>Kyiv) before making it primary, so that evergreen browsers won't send unrecognized new IDs to other systems.
+- This PR also changes a recommendation to a requirement to avoid observable updates to time zone data during the lifetime of an agent.
+- In parallel with ECMAScript spec changes, we're also driving other changes into IANA's data and into CLDR, which further reduces divergence.
+- MF was concerned about CLDR's decision (which is web reality for many engines) to allow IANA's merges inside a country but disallow changes between countries. These concerns are mitigated by a few factors. First, the most problematic cases (like offshore colony-like territories) tend to have their own ISO-3166 Alpha-2 country codes. Also, merges only apply to timestamps before 1970, which are rarely needed in computing at <= 60-minute precision. Finally, intra-country merges often happen because pre-1970 sources turned out to be incorrect, so the merge is correct.
+
+### Conclusion
+
+ECMA-402 PR #787 is approved to align ECMA-402 with the "web reality" of CLDR data and ICU behaviour used by many large ECMAScript engines. This will reduce current and future divergence between engines.
+
+## Status of TCQ reloaded
+
+Presenter: Christian Ulbrich (CHU)
+
+- [slides](https://cloud.zalari.de/s/ML74dxx9PRKmmao)
+
+CHU: I am going to give a brief update. "TCQ reloaded" was the title last time; this one is "TCQ reloaded reloaded". So, to get back to what we did last time: this is a nice warm place where people gather. Some call it TCQ. We know TCQ. I am comparing it to a bonfire because it is a good comparison: warm and cozy, and we love it, but somehow we are still worried that it might one day burn out. So what's the structure of TCQ? Well, it is a great big ball of mud.
+
+CHU: And so what are the problems with TCQ? The main ones right now are reproducibility and observability. It is running on who-knows-what infrastructure, some Azure thing, but we cannot easily restart it or deploy it anywhere else. It's hard to maintain. A lot of PRs have been piling up, and we are not the ones deciding when and what to merge; that is Brian, and he does not merge them. Of course, that's not great for extensibility either.
+
+CHU: So we set out to do TCQ reloaded. And this is what TCQ reloaded now looks like. It's not a big ball of mud anymore, but half of a big ball of mud. So we took some part out of it, and then we painted some Docker around it.
+
+CHU: Having a look back at the problems that we had with the old TCQ: did we solve them? Well, I am afraid we did not really solve them yet. Of course, we can work with that. We showed last time that we can run it locally. But it's still heavily bound to Azure. Reproducibility, observability and maintainability haven't really been solved, and neither has extensibility. Yeah!
+
+CHU: So the thing is: if only software development weren't so hard. We kind of got lost a little bit. We started to do this and that, and we took the wrong road of improving the architecture and such. Right now, we know we don't have to innovate; it's clear we need to take a step back a little bit and focus on iterating on TCQ. This means it's now clear that the most important goal is to make it reproducible. We need to dockerize the stack. That allows us to do at least semi-automated deployment. And then we should also decouple it from Azure-specific technology; right now, it's clear that this is the road ahead. If we do that, we can implement typical software processes as well.
+
+CHU: The question is, when and who? Well, when: hopefully soonish. MF was telling me at dinner that I had 13 hours left.
+
+CHU: So yeah, we already have some pieces here; we have already dockerized parts of it, and we want to achieve this quite soon. Who? We will be doing it, but it's [open](https://github.com/zalari/tcq), so anyone can help. I think once we have reached the point where we can deploy it locally, we can start improving it iteratively. The goal is to have it ready for the next plenary and prove that it runs reliably and reproducibly, and then we can start innovating again.
+
+USA: Yeah. Thank you for all the work you're putting in. I just wanted to point out, yet again (not that it hasn't been already), that retention is something TCQ is really, really bad at. If any topic is skipped by accident, it's gone forever.
+
+USA: And as much redundancy as you can put into that would never hurt. So even if we could, say, download a history of all the items on the queue, that would go a long way. It would probably be one of the most useful additions or changes.
+
+CHU: Yeah, sure. As I have said, the most important thing right now is to get to the point where we have something that is stable, so that we can finally implement new features in the future. Of course, implementing retention mechanisms shouldn't be so difficult.
+
+CHU: Okay, you are welcome to add any potential feature wishes to the [repo](https://github.com/zalari/tcq).
+
+### Summary / Conclusion
+
+If only software development weren't so hard. TCQ reloaded got off track a little bit by focussing on architecture instead of getting it to run. We have now figured out that reproducibility is the main goal we need to achieve before iterating on and improving TCQ altogether. The actual goal is to abstract TCQ reloaded enough that we can easily deploy it in arbitrary environments, both locally and online. We will use this to deploy it ourselves for the next plenary and thus validate that it works reliably. Building on top of that, we can then iterate on it.
+
+## eval() changes for trusted types update
+
+Presenter: Nicolo Ribaudo (NRO)
+
+- [proposal](https://github.com/tc39/proposal-dynamic-code-brand-checks/pull/17#issuecomment-2142865060)
+- no slides presented
+
+NRO: If you remember, last plenary we had changes to the `eval` and `new Function` host hook to support the Trusted Types spec. One of the changes presented last time was for the Function constructor to pass the fully constructed function string to the host, so the host can use it for its behavior; for example, developers can track where the different things are coming from and what's being evaluated. We reached consensus on not passing the full function string to the host, and instead just passing the various pieces, namely the parameters and the body, expecting the host to re-concatenate the pieces by itself, duplicating the work that 262 does.
+
+NRO: And the reason for that was that we thought the string was spec-internal; it was not exposed in any way. However, it turns out that it is exposed, through the toString method of the function returned by the Function constructor.
+
+NRO: So given that, we would like to add it back again. Specifically, the change is that we now build the full string and pass it to the host as a parameter that previously was not passed. There is an observable difference; that's why I am here. The string includes the full function string. For example, for an async function, the string is `async function (`, then the parameters, and then the body, while before we were passing just the parameters and the body, so it was not possible for the host to build the exact representation of the function.
+
+NRO: Yeah. So the repository I am presenting is updated. Are there any concerns with this?
+
+RPR: There are no concerns in the queue. Are there any voices of support?
+
+MF: I support this change.
+
+RPR: We have a positive from MF, DLM, and from ACE.
+
+NRO: Okay. Thank you.
+
+### Summary
+
+In the 2024-04 TC39 meeting we decided to _not_ expose the built string to the host, under the assumption that that string was spec-internal only. Our recommendation was that instead the host should re-concatenate the string pieces to build its own representation of the string.
+
+It turns out, however, that the concatenated string was already exposed to users through `new Function(...).toString()`, which (differently from most `Function.prototype.toString` behaviors) is _not_ implementation-defined. This PR thus re-exposes the concatenated string to the host, to be used as-is.
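+
+This observability is easy to check: per CreateDynamicFunction, the assembled source text has the shape `function anonymous(` + parameters + `\n) {\n` + body + `\n}`, and toString returns exactly that text:
+
+```javascript
+// The string assembled by the Function constructor is observable via
+// toString(), unlike most function serializations, whose source text
+// is whatever appeared in the original program.
+const f = new Function("a", "b", "return a + b");
+const src = f.toString();
+console.log(src.startsWith("function anonymous(")); // spec-mandated prefix
+console.log(src.includes("return a + b"));          // the body, verbatim
+```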
+
+### Conclusion
+
+The PR https://github.com/tc39/proposal-dynamic-code-brand-checks/pull/17 has reached consensus.
+
+## Avoid second pass/buffer in base64 setFromBase64/setFromHex methods
+
+Presenter: Kevin Gibbons (KG)
+
+- [proposal](https://github.com/tc39/proposal-arraybuffer-base64)
+- [PR](https://github.com/tc39/proposal-arraybuffer-base64/pull/58)
+- no slides presented
+
+KG: So this is the Base64 proposal at Stage 3. That means it's currently undergoing implementation, and we expect feedback prompting changes to the proposal to come from users and implementations.
+
+KG: [first issue](https://github.com/tc39/proposal-arraybuffer-base64/issues/57). This is one of the two items for this meeting that address pieces of feedback the proposal has gotten since reaching Stage 3. This one comes from PHE at Moddable. He points out that the methods that write into an existing Uint8Array are currently specified to work the way almost all functions in JavaScript work: they do all the validation up front, before making any observable changes to the thing they are supposed to write to.
+
+KG: This is usually a reasonable thing to do. But in the specific case of these methods, it means that it's impossible to implement them without either requiring a full pass over the entire input to validate it, or keeping a second buffer, doing the decode, and then copying it in. Specifically, this matters in the case where the input is invalid, but only becomes invalid a thousand characters in or whatever. As currently specified, the method wouldn't write any of the data decoded from the first 1000 characters to the buffer.
+
+KG: So I think that Peter's point is good. It makes sense not to require the second pass or the second buffer in this case, even though it's inconsistent with the usual approach in the spec. You would only ever notice the difference in the case that your input is invalid, because if the input is completely valid to begin with, it all ends up written to the buffer anyway and the difference doesn't show up. So it seems silly to slow down the common case of correct data just for the benefit of not having some garbage data written in the case of invalid data.
+
+KG: So I would like to propose making this suggested change ([PR](https://github.com/tc39/proposal-arraybuffer-base64/pull/58)). I have spec text. It's relatively straightforward: you keep track of where the error is, and also write the data that you decoded prior to the error. It does have the slight consequence that in the case of an error, you don't actually know where the error occurred; you just have some unknowable amount of garbage data in the buffer. That's basically fine. The assumption is that if you hit this error, you are going to discard the buffer; there isn't any other reasonable behavior. If people are really worried about that, we could work something out by putting a property on the SyntaxError that gives the offset of the error or something, but I don't really think we should bother. Throwing an error and saying that the expected use in the case of an error is that you discard the buffer is completely reasonable.
+
+KG: So that's my proposed change. Just in these two methods, setFromBase64 and setFromHex: in the case you hit an error, you still write all the data that you decoded prior to that error, thereby avoiding a second buffer or a second pass over the input.
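+
+A rough sketch of the agreed single-pass semantics (not the spec algorithm; `setFromBase64Sketch` is a hypothetical name, and padding plus the `lastChunkHandling` option are omitted for brevity):
+
+```javascript
+// Decode base64 chunk by chunk, writing each decoded chunk into the
+// target before validating the next one, so that data decoded before
+// an error survives in the buffer instead of being rolled back.
+const ALPHABET =
+  "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
+const VALUE = new Map([...ALPHABET].map((c, i) => [c, i]));
+
+function setFromBase64Sketch(target, input) {
+  let read = 0, written = 0;
+  // Process complete 4-character chunks while there is room for 3 bytes.
+  while (read + 4 <= input.length && written + 3 <= target.length) {
+    const chunk = input.slice(read, read + 4);
+    let bits = 0;
+    for (const c of chunk) {
+      if (!VALUE.has(c)) {
+        // Error found: everything decoded so far is already written.
+        throw new SyntaxError(`invalid base64 character at index ${read}`);
+      }
+      bits = (bits << 6) | VALUE.get(c);
+    }
+    target[written++] = (bits >> 16) & 0xff;
+    target[written++] = (bits >> 8) & 0xff;
+    target[written++] = bits & 0xff;
+    read += 4;
+  }
+  return { read, written };
+}
+```
+
+On invalid input this throws a SyntaxError, but the bytes decoded before the error remain in the target, matching the change requested here.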
+
+RPR: We have a note that SYG passed along ahead of time: "V8 supports the proposed change; the performance benefits of streaming and bringing your own buffer are diminished if there needs to be a validation pass first". That is a voice of support.
+
+KG: Okay. Having heard support and not having any opposition, I would like to ask for consensus for this change.
+
+RPR: There is thumbs up from LCA in the room. So I think you have – yeah. You have consensus.
+
+KG: Thanks very much. And thanks to Peter for pointing this out.
+
+### Summary / Conclusion
+
+The committee has reached consensus on the proposed change, which makes it so that decoded data is still written up to the occurrence of the first error in the setFromBase64 and setFromHex methods included in this proposal.
+
+## Option to omit padding in toBase64
+
+Presenter: Kevin Gibbons (KG)
+
+- [Issue](https://github.com/tc39/proposal-arraybuffer-base64/issues/59)
+- [PR](https://github.com/tc39/proposal-arraybuffer-base64/pull/60)
+
+KG: This is the second piece of feedback for this proposal. The previous one came from implementers; this one comes from users, although I guess also from implementers of other specifications.
+
+KG: As a reminder, Base64 as it exists in the wild has a couple of relevant variations. The biggest is whether to use the standard Base64 alphabet or the so-called Base64URL alphabet, which replaces the `+` sign with `-` and the forward slash with an underscore. A number of systems require one or the other. We support both in this proposal, and there's an alphabet option that allows you to pick between the two.
+
+KG: The other major variation among Base64 implementations is whether to include padding, the final one or two `=` characters that make the total length of the output a multiple of 4. There are cases where you need the padding, although it's usually redundant. Many decoders expect that padding to be there, most notably Python's. Some expect the padding not to be there; in particular, a number of web specifications use the Base64URL alphabet and require absence of padding.
+
+KG: It is not terribly difficult to strip off the padding: you can simply check whether the last one or two characters are `=` and, if so, trim the string. But these are other web specs; it's nice to be easily interoperable with those cases. There was formerly an option to specify whether padding was included, but that option was removed at some point for the sake of simplifying the API. Given the feedback from users in real life, I think it makes sense to re-introduce it.
+
+KG: So this proposed change is to add an "omitPadding" option to the existing options bag argument of the Base64 encoding API, which just says whether or not to include `=`. The default is false, which means the padding is included. I realize the naming is kind of awkward, but W3C guidelines for web APIs are clear that the default for any boolean option must be false, and we want the default behavior to include padding, to match most Base64 encoders in existence, including notably `btoa`, the web standard Base64 encoder.
+
+KG: So there was some discussion of making the option depend on the alphabet. I am not really in favor of that. As a user, it’s weird that changing the alphabet would also change the padding. And if the option is present, it’s easy to specify if needed.
+
+KG: There's really no consensus on whether the `=` is required to be present or absent when using Base64URL. As previously mentioned, a number of web specs require it to be absent; Python's Base64 decoder requires the `=` to be present.
+
+KG: So given that, I think the simplest thing is to have the default be present regardless of the alphabet, and let the user disable it if necessary. Like I said, this option isn't strictly necessary; you can strip the padding with slicing, but you have to check how many `=` there are. Anything in the queue?
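+
+A minimal sketch of the option's effect (`toBase64Sketch` is an illustrative stand-in for the proposal's `Uint8Array.prototype.toBase64`, which may not be available in your runtime yet; the alphabet option is omitted):
+
+```javascript
+// Encode bytes to base64; omitPadding controls the trailing '=' signs
+// that pad the output length up to a multiple of 4.
+function toBase64Sketch(bytes, { omitPadding = false } = {}) {
+  const ALPHABET =
+    "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
+  let out = "";
+  for (let i = 0; i < bytes.length; i += 3) {
+    const [a, b, c] = [bytes[i], bytes[i + 1], bytes[i + 2]];
+    const bits = (a << 16) | ((b ?? 0) << 8) | (c ?? 0);
+    out += ALPHABET[(bits >> 18) & 63] + ALPHABET[(bits >> 12) & 63];
+    out += b === undefined ? "" : ALPHABET[(bits >> 6) & 63];
+    out += c === undefined ? "" : ALPHABET[bits & 63];
+  }
+  // Default (omitPadding: false) pads to a multiple of 4, like btoa.
+  if (!omitPadding) out += "=".repeat((4 - (out.length % 4)) % 4);
+  return out;
+}
+
+const bytes = new Uint8Array([104, 105]); // "hi"
+const padded = toBase64Sketch(bytes);                       // "aGk="
+const bare = toBase64Sketch(bytes, { omitPadding: true });  // "aGk"
+```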
+
+RGN: The PR itself looks good. I agree with the decision to make the omitPadding default unconditional. I’m in favor of including this. Thank you.
+
+KG: Great.
+
+RPR: Any other comments?
+
+KG: Okay. Hearing nothing else, I would like to ask for consensus for the proposed change.
+
+RPR: No one is volunteering anything. From what I can see, you have at least got one message of support from RGN.
+
+KG: Good enough for me. I will take that as consensus.
+
+### Summary
+
+The PR adds an option, "omitPadding", to the options bag argument. It defaults to false, which is to say it defaults to including padding, and allows specifying `omitPadding: true` to not generate the `=` padding when doing Base64 encoding. This is orthogonal to the alphabet option: its value doesn't depend on the alphabet, and the user can choose each one explicitly.
+
+### Conclusion
+
+This PR https://github.com/tc39/proposal-arraybuffer-base64/pull/60 reached consensus.
+
+## Iterator sequencing
+
+Presenter: Michael Ficarra (MF)
+
+- [proposal](https://github.com/tc39/proposal-iterator-sequencing)
+- [slides](https://docs.google.com/presentation/d/1gOs4UDAcaIF6Dc9z1qXus-ljizrRTSty5O-GbcM9NTs/)
+
+MF: So, a reminder from the previous times this was discussed: I had outlined these five goals for the proposal. We want to be able to compose 2 iterators, the most important goal. We also want to conveniently compose 0 or more iterators. We want to compose an infinite sequence of iterators. We want to interleave non-iterators among the iterators, so they can be composed as if they were an iterator of those things. And we want it to be discoverable or familiar to people.
+
+MF: We had considered a bunch of different possible solutions here. A variadic `Iterator.from`: remember that we currently have `Iterator.from` from iterator helpers, but it only takes a single argument. `Iterator.prototype.flat`. `Iterator.prototype.append`, which is given zero or more iterators to append onto the this value. An `Iterator.concat` that takes an iterable of iterators, and an `Iterator.concat` that takes zero or more iterators. The solution I presented at the last meeting (I don't know if I was attempting to go for Stage 2, but I think I was just presenting an update of my current thinking) was a variadic `Iterator.from` plus `Iterator.prototype.flat`. The idea was that the goals I had listed were actually separable: we're solving two problems, and that requires two solutions. The `Iterator.from` solution is for when you have some small number of iterators or iterables, and `Iterator.prototype.flat` is for when you have an iterator, possibly infinite, of iterators or iterables. For time reasons, I'm going to skip reading off the feedback I had gotten.
+
+MF: You may remember the feedback from last time, but what that resulted in is I've changed the goals now. I have removed the goal for composing an infinite sequence of iterators, mostly because it was harder to justify that. It's not as easily justified as the other goals.
+
+MF: And I have added a new goal, due to KG's feedback last time: not having an observable difference between the 1-iterator case and the 0-or-more-iterators case. You shouldn't be able to tell how you composed it to get to that point.
+
+MF: Given the new goals, I have a new solution: `Iterator.concat`, passed 0 or more iterators, to sequence them.
+
+MF: So we meet all five of the new goals, but it has one known downside: JHD has previously expressed concern about the name concat, something about it being related to `Array.prototype.concat`, with developers then assuming that the oddness of `Array.prototype.concat` carries over to `Iterator.concat`, which it wouldn't. We obviously do not want the oddness of `Array.prototype.concat`, which means we may have to consider alternative names for this operation.
+
+MF: I still think the solution is good. So I've listed out some alternative names. I've listed them in my preference order. I do not want to bike shed the name today. I'm showing these here only as evidence that I think there are a sufficient number of names that we could possibly choose from, and that eventually I do think we'd be able to choose an acceptable name that would make everyone happy.
+
+MF: So given that, I have written out the full spec text. This is extracted from my previous solution, the variadic `Iterator.from`; it's essentially the non-single-argument case.
+
+MF: So I have full spec text available. And I have polyfill in the repo and tests for the happy path.
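+
+As a rough illustration of the intended sequencing behavior (not the proposal's spec text or its actual polyfill, which handle iterator closing and argument validation more carefully; `concatSketch` is a hypothetical name):
+
+```javascript
+// Sequence zero or more iterables, one after another. The real proposal
+// also accepts plain iterators and defines precise closing semantics.
+function* concatSketch(...iterables) {
+  for (const iterable of iterables) {
+    // yield* opens each argument in turn and forwards its values;
+    // it also closes the current inner iterator on early exit.
+    yield* iterable;
+  }
+}
+
+// Composing two iterables, the most common case:
+const result = [...concatSketch([1, 2], [3])];
+// The zero-argument case yields an empty iterator:
+const empty = [...concatSketch()];
+```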
+
+MF: So I am only looking for Stage 2 here. I think I have met the requirements.
+
+DM: We support this and have no opinion on the name.
+
+RPR: We have thumbs up from LCA in the room. Thumbs up from DE in the room.
+
+KG: Is JHD here?
+
+RPR: JHD will not be here until the final session of the day.
+
+KG: Given that he's the only person who has expressed a problem with this name, I am happy to go to Stage 2, but we might want to revisit when he's here, since he has said he's not in favor of this choice of name. We probably need to give him a chance to object, and override that objection if we decide to override it.
+
+MF: That's actually the reason why I was being very clear that I'm not going for stage 2.7 here, I'm going for stage 2, which shouldn't require us to have settled on a name, just to have evidence that it is very likely we will be able to choose a name.
+
+KG: okay. I am happy with that.
+
+RPR: He didn’t put this in his agenda constraint. So he probably doesn’t have any concerns.
+
+RPR: Okay. We have a + 1 from RGN. And then we have a question from Justin.
+
+JGT: Yeah. I am wondering, because there are already a concat and a join in ECMAScript: just out of curiosity, from your perspective, is this operation more similar to concat or more similar to join?
+
+MF: You’re asking about an array prototype?
+
+JGT: Yeah. That’s correct.
+
+MF: Given the option between the two, it is more similar to concat. As I noted before, concat has some strange behavior from a very long time ago that we do not intend to copy or emulate.
+
+MF: So JHD’s objection from previous presentations is that developers may expect that odd behavior, if we chose the name concat. Otherwise, I think concat is a very appropriate name.
+
+DE: For the particular issue of `Array.prototype.concat` having different behavior from `Iterator.concat`: I'd like us to consider `Symbol.isConcatSpreadable` to be regrettable. With `Symbol.species`, we decided that we don't like the feature, even though we did it in ES6; we won't be held to following `Symbol.species` in the future. We could do the same with concat. The time between Stage 2 and 2.7 will give us time to see whether we feel that way as a committee.
+
+RPR: Okay. So I think it’s been – sorry. KM may have further discussion on the concat.
+
+KM: Sure, I am happy to clarify that. Implementing `Symbol.isConcatSpreadable` was 6 months of work, and nobody uses it; I have never seen a single site use it. So yeah, definitely, please do not duplicate that.
+
+RPR: Please carry on, MF with – you want to ask for consensus?
+
+MF: Yes, I would like to ask for Stage 2. And assuming we do get it, I would also like to ask if there are any opinions on pursuing the infinite case that was dropped from this proposal so it could advance, if anybody is interested in taking that on or wants to discuss it further.
+
+MF: Let’s first see if we can get Stage 2 and then I will be interested in that.
+
+RPR: Okay. I think we have heard support for Stage 2. Are there any objections to Stage 2? Okay. All right. Congratulations, you have Stage 2. And you want to continue?
+
+MF: Yes. We need to assign reviewers and I would like to see if anybody has interest in pursuing the infinite sequencing portion of this or thinks it should be pursued or maybe has the opposite opinion and thinks it should not be pursued. Without any feedback, I would probably not pursue it at this point.
+
+RPR: So NRO has volunteered to review and LCA has a question.
+
+LCA: What are the use cases for concat of an infinite sequence of iterators?
+
+MF: Yeah, so I have written generators that generate infinite sequences of iterators, or of iterable things. I wanted to then iterate all the contents of those things. But those haven't been in very realistic situations. They've been mostly like, oh, this is a handy helper for a test or something. I imagine that if it's a handy helper for a test, it's probably a handy helper for something else that somebody is doing. But it's certainly not as common as just saying, I have two iterators that I'm holding right now that I just want to iterate one after the other. It certainly dwarfs the other in commonality. So that's why I didn't want to have to hold up that part by trying to fully justify the infinite one. I think there's certainly some value in it, but it's going to be more difficult to prove that.
+
+KG: Then, yes, if you have a few extra minutes, I did want to raise the question of how to handle closing iterators to the committee’s attention.
+
+KG: Concretely, the protocol has a mechanism for closing iterators. The practice is that any iterator you open, you should close, and any iterator you in some sense take ownership of, you're expected to close when you finish with it; that is, unless you exhaust it, because if you exhaust it, it has closed itself. The last bullet point here: for concat, for this static method, the behavior is probably going to be that if you give it an iterable, it opens the iterable and then closes it, if it gets there. But if you close the result of `Iterator.concat` before, say, the third argument is reached, there's the question of what you should do about closing that argument. I think the best behavior is to try to figure out if it's an iterator by looking up the `next` property, and if there is a callable `next`, then assume it's an iterator and therefore you're responsible for closing it, even though you didn't open it. That's a little bit weird, but I think it's important to close iterators that you take ownership of. This is of course something that will need to be worked out before Stage 2.7. I mostly want to raise this question to the committee, and to remind you that it comes up when working with iterators.
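+
+The closing protocol KG describes is visible with a plain for..of loop: exiting the loop early calls the iterator's `return()` method, which is what "closing" means here:
+
+```javascript
+// Early exit from for..of invokes return() on the iterator the loop
+// opened, giving the iterator a chance to release any resources.
+let closed = false;
+const endless = {
+  [Symbol.iterator]() { return this; },
+  next() { return { value: 1, done: false }; },
+  return() { closed = true; return { done: true }; },
+};
+for (const value of endless) {
+  break; // the loop closes the iterator it opened
+}
+console.log(closed); // true
+```
+
+The open question above is whether `Iterator.concat` should do the same for arguments it never got around to opening.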
+
+MF: Yeah. So that was part of the difficulty of doing the infinite case that I didn’t really want to have to deal with as part of this proposal. I think I will continue without trying to address that infinite case, even in a follow-up proposal. Thank you.
+
+### Summary
+
+- The previous proposal was attempting to solve multiple cases, including a small finite case, and an unbounded/infinite case. Trying to solve the infinite case was both difficult and harder to justify. The infinite case is now dropped from the proposal, to focus on the small finite case. We are solving it with a static `Iterator.concat` that takes 0 or more iterators.
+
+### Conclusion
+
+- Iterator Sequencing has Stage 2
+- NRO, JMN, and RGN to be Stage 2.7 reviewers
+- The proposal no longer attempts to address the infinite case, and there is no desire at the moment to pursue that in a separate proposal
+- Some consider `Symbol.isConcatSpreadable` to be a regrettable error that we aren’t carrying forward, like `Symbol.species`
+- During Stage 2, we will choose a final name, which may still be `Iterator.concat`
+
+## Async Iterators Update
+
+Presenter: Kevin Gibbons (KG)
+
+- [proposal](https://github.com/tc39/proposal-async-iterator-helpers)
+- [slides](https://docs.google.com/presentation/d/1cjCkBRWwNFu01HUEcWQ6AsSgVGOxTj4cVvz_9XCyAkw/)
+
+KG: Okay. So hello. I have a 45-minute timebox for this item. And while I do have a fairly long presentation, touching on a number of different topics to go through, it’s not a 45-minute presentation. The purpose is to invite discussion. To that end, if – while I am presenting on something you have a thought or even just a – like, you want to think more about it, or talk more about it, for anything that I have brought up, please jump in and bring that up right away. There’s no need to wait until the end of the presentation. Because there’s a bunch of stuff in here. Like I said, I will be touching on quite a few different topics regarding the design of async iterators. With that said, let’s go through it. There’s been some discussion happening in the issues of the repository, if you are interested in following along through there.
+
+KG: A reminder, the fundamental thing happening here, and I have had this same slide up in 5 presentations now, is that AsyncIterators, which have existed in the language for a long time, but haven’t had any particular utilities associated with them, turn out to be designed in such a way that it is possible that we can use them as a concurrency mechanism. The only AsyncIterators in the language right now are from async generators. AsyncGenerators are not a way of writing concurrency because they are inherently queuing. This falls out of the fact that they are defined using syntax. In this language, at least, syntax is a single flow, you are not in two places at once with syntax. AsyncIterators don’t have to have that restriction. And in particular, AsyncIterator helpers which are the counterpart to Iterator helpers could easily lift that restriction.
+
+KG: For example, if the mapping function which you pass to `%AsyncIterator%.prototype.map` returns Promises, you could easily have something that allows you to pull two Promises from the result of this map operation and wait for both at the same time.
+
+KG: Whether or not this would actually do something concurrently, depends on the design of AsyncIterators. The reason that AsyncIterators have not advanced is because we have been working on designing to allow this kind of concurrency.
+
+KG: This is really nice in a lot of ways. It gives you concurrency between different calls of the callback function, as well as between the callback and the underlying iterator. It preserves the important property that the concurrency is driven by the consumer, so you don’t have issues with back pressure: the consumer decides how fast values are consumed.
+
+KG: And to make this easier to use, the intention is to include an extra method to buffer values. This is not something that will do arbitrary buffering; it won’t keep pulling from the underlying thing forever. It has a specified number of slots, and when you start iterating this, it will pull that many times concurrently from the underlying `this` and serve values from the buffer when you pull from the result of this helper.
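+
+The lazy buffer KG describes can be sketched in userland. Everything below is illustrative only: `buffered` and `slowSquares` are made-up names, and the real proposal's semantics (closing, error ordering, counting vended promises toward the buffer) are more subtle than this naive version.
+
+```javascript
+// Hypothetical userland sketch of a lazy, order-preserving buffer.
+// NOT the proposed API; refilling here ignores vended-but-unsettled promises.
+function buffered(source, n) {
+  const buffer = []; // promises of IteratorResults, in order
+  function fill() {
+    while (buffer.length < n) buffer.push(source.next());
+  }
+  return {
+    next() {
+      fill(); // lazy: only start pulling once the consumer pulls
+      return buffer.shift();
+    },
+    [Symbol.asyncIterator]() {
+      return this;
+    },
+  };
+}
+
+// A hand-written async iterator whose pulls genuinely run concurrently
+// (an async generator would queue them), tracking peak concurrency.
+function slowSquares(count, delayMs) {
+  let i = 0;
+  let active = 0;
+  let maxActive = 0;
+  return {
+    maxActive: () => maxActive,
+    next() {
+      if (i >= count) return Promise.resolve({ done: true, value: undefined });
+      const v = i++;
+      active++;
+      maxActive = Math.max(maxActive, active);
+      return new Promise((resolve) => {
+        setTimeout(() => {
+          active--;
+          resolve({ done: false, value: v * v });
+        }, delayMs);
+      });
+    },
+    [Symbol.asyncIterator]() {
+      return this;
+    },
+  };
+}
+
+async function demo() {
+  const source = slowSquares(5, 20);
+  const out = [];
+  for await (const value of buffered(source, 3)) out.push(value);
+  return { out, maxActive: source.maxActive() };
+}
+```
+
+Running `demo()` yields the squares in order while three pulls are in flight at once. Note this naive sketch does not count vended-but-unsettled promises toward the buffer, so it exhibits exactly the "one extra concurrent call" problem KG raises later in the presentation.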
+
+KG: And I learned after I suggested this, that in fact Rust has basically this same thing. This [on screen] is an example of some Rust code. It is doing something similar.
+
+KG: It is doing this buffered operation on the first map, allowing the subsequent operation to proceed concurrently. Now, Rust streams and futures are different from JavaScript promises; in particular they have this executor model where you have to be driving the futures, rather than the promises doing things on their own.
+
+KG: But while that will have some relevant differences for later in the presentation, for the basic buffer helper, it’s pretty much identical. So that makes me feel better about it.
+
+KG: Okay, so other than that, what are we leaving out? Well, lots of stuff. This is the current plan; the proposal is only at stage two, this is all up to be changed if you think it should change, but this is my current plan. We've been going back and forth on a bunch of this, and I've sort of settled on this particular set of things to exclude and include for this version of the proposal since the last meeting, or since I last presented this anyway. So what are we leaving out? The most important thing by far is that we are leaving out unordered helpers. So these helpers are inherently order-preserving, which gives up a lot of concurrency. There are a lot of problems where you don't care about the order that the final values come in; you have a bunch of work to do and you just care that the work gets done. Being order-preserving gives up a lot of the concurrency, and in fact I have this lovely animation from a Visualizing Rust Streams project that is showing the Buffered helper. The idea here is that these five boxes represent the buffer where the map operation is happening concurrently five times. And the items are hanging out in the buffer. I want to point out, the middle box is filled up, but it can’t be consumed until the ones ahead of it in the queue also complete, so you can have the order-preserving property. We are giving up a lot of concurrency.
+
+KG: Rust has another helper, called buffer_unordered. I was originally intending to include it in the proposal but am no longer intending to. It looks like this. The values from the buffer are able to feed into the sink, or be pulled into the sink, as soon as they are ready instead of waiting for earlier values. This recovers much of the possible concurrency, but not all. There are a number of problems, and that’s why I have decided it doesn’t make sense to include it in this version of the proposal. I think there’s a lot more room to explore primitives or helpers for doing unordered concurrency. Since buffer_unordered is not sufficient to get you full concurrency for all problems, I don’t want to include it until and unless we decide that it makes sense to include despite that limitation.
+
+KG: Right now my plan is to include only the simple ordered buffer that gives you a good bit of concurrency, with the constraint of results being produced in order. And there are some interesting designs for unordered helpers, which we will touch on briefly later. MF especially is interested in exploring this space. So for anyone in the room, if this tickles your fancy, go talk to Michael about it. My intention is to not include any unordered helpers in this version of the proposal.
+
+KG: On a related note, it’s not going to include any way of doing concurrency with the _consuming_ helpers, for example `find`. If the search function is asynchronous, this proposal won’t have a way of doing that search concurrently. It is always possible to do this by an awkward sequence of map, buffer, filter, take(1), but it’s often quite awkward, and there’s not one obvious best way of doing it. The simplest thing would be to have a concurrency parameter that specifies a degree of concurrency for these helpers. And I am not opposed to that, but I think the course of addressing the previous thing might affect the design, so I don’t want to include that right now. I do think we have usually found that adding a second parameter like this is web compatible, but it’s not a guarantee. It’s possible we might find we like that design but can't do it for web compatibility reasons. It’s likely we will be able to, but I wanted to mention that possibility.
+
+KG: So I see LCA asking about why buffer unordered is not fully concurrent. Let’s see if I can get there. I will skip ahead. Okay.
+
+KG: The last couple of minutes I was talking about leaving out concurrency for the helpers. Why isn’t this good enough? This [on screen] is sort of a representative example. So just to talk through this code a little: imagine you have an AsyncIterator that produces two values, the first one slowly, the second one immediately. You are filtering over this. The basic filter helper necessarily has to resolve its promises in order, at least prior to the end of the underlying iterator. It doesn’t know if the result of the second call to the predicate is going to end up going into the first or second promise; you can’t know this without knowing the result of the first call. So even if you do a buffer unordered on the result, it doesn’t help you, because the result of the filter is constrained to settle its promises in order, to ensure it doesn’t have the problem of the second thing resolving with a value and then discovering that the first thing that was vended is, in fact, not present. Like, if the predicate returned false for the first value and the helper has already settled the second promise, it’s stuck: there's no reasonable thing to resolve the first promise with at that point. As far as I can tell, in Rust this is just the case. Like, if you do a buffer_unordered over a filter, you don’t get the concurrency that you would like, and there’s nothing you can do about this. You really need the filter helper itself to be aware that it is free to reorder the result of the stream in order to get concurrency in this case. I’m sorry. It’s 3 a.m. I don’t know how coherent I am.
+
+LCA: I think that makes sense. But isn't this solved by moving the buffer unordered before the filter?
+
+KG: Not if the predicate is itself asynchronous.
+
+LCA: I see. Okay, thanks.
+
+KG: Okay. So like I said, I am not planning on including any unordered helpers right now, or any way of doing concurrency for forEach and find and sum and reduce and so on. It would be really nice to have these be concurrent, but there’s not really an obvious way of doing it. I am planning on leaving that out, at least pending a resolution to how we deal with unordered map and so on.
+
+KG: Another thing I will leave out but intend to follow up with is some way of racing multiple promises to get an AsyncIterator that yields the promises as they resolve. We have all used `Promise.race` for this. If you have a bunch of work that you want to do, it is a pretty common pattern to start all of that work and `Promise.race` the results, and then whenever one is done, `await Promise.race` of the remaining values. A helper that takes such a collection of promises and gives you an AsyncIterator of the promises in the order in which they resolve would allow you to do that in a really concise manner, as in `for await (item of AsyncIterator.race(items))` or whatever.
+
+KG: It’s just another way of turning an iterable of promises into an AsyncIterator; it’s more like a promise helper. If we had a buffer unordered, it’s somewhat redundant: if you do a buffer unordered and pass it, as its bufferSize parameter, the size of the underlying collection, then that’s what I have just described. So I am not planning on including this right now. I just wanted to raise it as a probable follow-up.
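+
+The race-to-an-iterator pattern KG describes can be sketched with existing language features. The name `raceIterator` is made up for illustration (KG's `AsyncIterator.race` spelling is hypothetical too), and rejection handling is ignored for brevity.
+
+```javascript
+// Illustrative sketch: turn an array of promises into an async iterator
+// that yields values in settlement order, not argument order.
+function raceIterator(promises) {
+  // Tag each promise with an id so the winner can be removed from the set.
+  const pending = new Map(
+    promises.map((p, id) => [id, p.then((value) => ({ id, value }))])
+  );
+  return {
+    async next() {
+      if (pending.size === 0) return { done: true, value: undefined };
+      const { id, value } = await Promise.race(pending.values());
+      pending.delete(id);
+      return { done: false, value };
+    },
+    [Symbol.asyncIterator]() {
+      return this;
+    },
+  };
+}
+
+async function demo() {
+  const delay = (ms, v) => new Promise((resolve) => setTimeout(() => resolve(v), ms));
+  const out = [];
+  for await (const v of raceIterator([delay(60, 'a'), delay(10, 'b'), delay(35, 'c')])) {
+    out.push(v);
+  }
+  return out; // settlement order: 'b', then 'c', then 'a'
+}
+```
+
+This is the "`await Promise.race` of the remaining values" loop wrapped up so the consumer can just `for await` over it.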
+
+KG: A similar thing is if you have multiple async iterators, then you might reasonably want to merge them by racing promises. This is really useful, potentially even more useful in combination with possible other things. This pattern that was raised in one of the issues on the GitHub repository actually ends up being really nice. Let's see, I think I have a slide about this coming. Basically, you can imagine something that takes an async iterator and divides it into multiple async iterators, which all pull from the same source. And if we had something that would combine them at the end, this lets you get concurrency in a completely different way than the async iterator helper concurrency. This allows you to create a work queue, define the work for each queue separately, and then collect the values at the end by merging the resulting async iterators. Something that's really nice about this example is that it actually doesn't rely on map or filter being concurrent, because the concurrency is between multiple async iterators rather than within one async iterator. This is a really cute pattern. I don't know that it's necessarily the direction we want to go; I just want to raise it as a possible design for getting unordered concurrency in a follow-up.
+
+JWS: Just a question. If the example on the repo is correct?
+
+KG: Which example?
+
+JWS: On the AsyncIterator repo. The README. I was just checking if `X =`
+
+KG: Yes. This should have an `X =`
+
+JWS: That was all.
+
+KG: Yes. Good catch. Okay. Fixed.
+
+KG: Yeah, so I think this sort of split helper won't be in the minimum viable proposal. It may or may not make sense to include in a follow-up, but I'm mostly raising it as an example of where we might want to go for getting a different form of concurrency in the future. Crucially, I don't think it conflicts with anything that's in this proposal.
+
+KG: Another thing that is left out is any mechanism for limiting concurrency. There are two senses in which you might want to limit concurrency here. One is the concurrency of an AsyncIterator, because this design allows you to pull multiple times. I don’t think it makes sense for buffer to be this mechanism. It could, in principle, be, but I think it’s weird that adding a buffer would also restrict your concurrency; the buffer should be a buffer. But it is something that you might want, at least in some cases: if you are creating an AsyncIterator to be consumed by others, or if you are consuming an AsyncIterator that someone else has given you, it could be part of the contract that it is only pulled from a certain number of times at once. A related thing that I have needed many times is limiting the concurrency of a callback: if you are talking to some API, you probably want to limit how many times you are talking to the API at once, and have something limit it for you. I do think we ought to do that. It would be useful for this proposal, but also useful for many other things, and I am interested in pursuing that work separately.
+
+KG: So basically, we have the absolute minimum possible set of things. We have map, filter, flatMap, and `toAsync`. Each will have an affordance for concurrency: they allow you to pull from them multiple times, and for this to kick off additional work in at least some cases, if you pull from any of them concurrently. And there will be a buffered helper for doing those calls concurrently. It’s not the only way of doing those calls, just a convenient mechanism; you can of course call `.next` yourself, if you like. Otherwise, it will include all the helpers from the sync iterator helpers, with no additional affordances for concurrency in any of the others, because concurrency doesn’t make sense for them. With one caveat, which is drop, and we will get to that in a minute.
+
+KG: So that’s not to say we have settled everything. But we have settled a lot of things. I want to talk about some of the discussions that we have had and directions we have settled or not.
+
+KG: First, a fundamental design question. Backing up a step: concurrency is tricky. I think it’s very important to have as strong guarantees about your results as you reasonably can. The original guarantee that I wanted to have is that you get the same results in the same order as if you had made the calls sequentially, assuming your mappers are pure and that sort of thing. If you have side effects, you are not guaranteed they will happen in the same order, but the results are. I now think that’s too strong. In particular, I think that we should not have the guarantee in the case of an error. Without the caveat in the second paragraph, map would be constrained to settle its promises in order, the same way filter is. And this is sort of my example for it.
+
+KG: If you’re mapping over the natural numbers, 0, 1, 2, 3, et cetera, suppose that your mapper eventually throws for 0, and otherwise it resolves immediately. If we had the original consistency property saying you are guaranteed to get the same results in the same order, then if you await the first result, you would get an exception, and that would close the iterator. And then when you await the second result, you would get `{done: true}`. So if we want to have the same behavior for concurrent as for non-concurrent, that means we have to have that behavior for concurrent pulls. And there isn't a way to do that except for the second promise to wait for the first one to settle, so that the second promise could know if there was an exception. And I think that is giving up too much. I think that in the case where there are no errors, it’s possible to settle earlier. It just means that you have this caveat that if the callback throws for one of the earlier results, or the underlying thing yields a rejected promise, then the stream of values that you see is different: in particular, you see an exception and then something that is not `{done: true}`, which is otherwise impossible to observe. I think that’s okay. The error case, I think, is not the one we should be concerned about, and I don’t want to give up this possible concurrency for the non-exception case just to get a more consistent view of the world in the case of errors. So my intention is to weaken the story of what guarantees you get. Like I mentioned, even with this property, it doesn’t get you filter settling out of order.
+
+KG: I will skim through some of the next items. We mentioned closing iterators in an earlier presentation. If you call `.return`, that closes the iterator, and it will immediately call `return` on the underlying iterator. Being closed means that if you call `.next` on the result, you get `{done: true}` immediately. But it doesn’t mean that any previously vended promises are resolved: the values you requested before calling `.return` might still be outstanding and settle later. Any future calls will resolve with `{done: true}` and forward the results. The last case, later calls, could go either direction. And being closed in this sense is not triggered only by `.return`: if the callback throws, that’s the same as a `.return` call.
+
+KG: Yeah. And then this caveat, this question: what does `.drop` do? Does it wait for the promises that it’s dropping, or does it just literally ignore them? My inclination is that it should await each promise so any exceptions are raised. I think if you call `.drop` you probably don’t want to go past exceptions. The most predictable thing is for drop to still behave sequentially, in the sense of awaiting before pulling the next one, as the default behavior. I don't know whether to include the Boolean parameter.
+
+KG: Another one is that when you call `buffered`, it starts filling up the buffer only when you pull from it. This is consistent with iterators being lazy in general. But it’s actually hard to get the behavior of starting to fill the buffer before pulling from it in userland. So I think having an option to eagerly pull would make sense: probably a second parameter, either an options bag or just a Boolean that says, start immediately.
+
+KG: And yeah, one last thing. For `.buffered`, there’s a discussion of, if you pull from the buffer more times than the buffer has slots, do the promises that the buffered helper has vended count towards the buffer? If you say you have 5 items in the buffer, and then you take one out, but the thing you have taken out hasn’t settled yet, does that promise count against the buffer? Specifically, in this example on the slides here, if you’re doing a buffer over some map function, you really expect this to only have 5 outstanding calls to the callback at once, not 6. But the very first time you enter this loop, the call to `.buffered` pulls 5 items, and the loop takes the first one out, so the buffer only has 4 things in it; refilling it then would lead to calling the callback 6 times concurrently. I think the answer has to be that the vended promises count towards the buffer. For the common case, that’s the only reasonable behavior.
+
+KG: So those are my questions. Just to summarize the direction of the proposal: the plan is to do the minimum viable set of helpers, which is just map, filter, and flatMap with concurrency, and the buffered helper. We may or may not add a Boolean parameter for buffered or drop. There's a bunch of interesting directions to go in a follow-up. I think this design leaves room for any possible future design for helpers: for unordered helpers, and concurrency for forEach and so on. I would like to work toward advancing the proposal with this minimum set of things.
+
+KG: Great. Thanks for your time.
+
+ACE: (I'm curious if there have been discussions about 'global', as opposed to 'method local', concurrency.) I love this proposal. Concurrency of promises comes up a lot in Bloomberg internal chat, people asking how can I do this?
+
+KG: You dropped. Or I dropped? Ashley, if you are speaking, I can’t hear you.
+
+ACE: This is great, and it definitely works really nicely for small sections. The thing I have seen over the years, though, is that sometimes concurrency doesn’t work well when only applied locally. Like, lots of little functions take an array of file paths and read the files concurrently; when that is happening in other parts of the application too, it blows past the operating system's limit for how many files can be open at a time. The concurrency on this needs to be managed on a global scale. Are there methods here that would help with doing that? Or is the answer no, that’s not part of this, that will be something else? Just curious.
+
+KG: Yeah. That’s a good point. The answer is no, there is not anything in the proposal that will help with that. The only thing in that direction is the second part here: some way of limiting the concurrency of a callback, either a new concurrency primitive or something easier to use that just wraps up a callback or a collection of callbacks, so that at most some fixed number of calls to those functions are outstanding at once, and future calls get queued. But an alternative design is perhaps to have a task queue or something like that, for which, talk to Michael. But the short answer is no, nothing in this proposal.
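+
+The "wrap a callback so only a fixed number of calls are outstanding" idea KG mentions can be sketched in userland. `limitConcurrency` is a made-up name for illustration, not anything proposed.
+
+```javascript
+// Illustrative sketch: wrap an async function so at most `limit` calls run
+// at once; extra callers wait in a FIFO queue for a slot.
+function limitConcurrency(fn, limit) {
+  let active = 0;
+  const waiters = []; // resolvers for queued callers, FIFO
+  return async function (...args) {
+    if (active < limit) {
+      active++;
+    } else {
+      // Wait until a finishing call hands us its slot.
+      await new Promise((resolve) => waiters.push(resolve));
+    }
+    try {
+      return await fn(...args);
+    } finally {
+      const next = waiters.shift();
+      if (next) next(); // transfer the slot directly, keeping active <= limit
+      else active--;
+    }
+  };
+}
+
+async function demo() {
+  let running = 0;
+  let maxRunning = 0;
+  const task = limitConcurrency(async (v) => {
+    running++;
+    maxRunning = Math.max(maxRunning, running);
+    await new Promise((resolve) => setTimeout(resolve, 10));
+    running--;
+    return v * 2;
+  }, 2);
+  const results = await Promise.all([1, 2, 3, 4, 5, 6].map((v) => task(v)));
+  return { results, maxRunning };
+}
+```
+
+As ACE's file-handle example suggests, the limiter has to wrap the shared resource globally; sprinkling local limits around the codebase does not bound the total.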
+
+ACE: That is my understanding. Just confirming. Thanks, KG.
+
+USA: Next on the queue we have JWS.
+
+JWS: (Is the plan to resolve unordered buffers before stage 3 or add it as a follow-up proposal?) Hi. Yeah. It was just to clarify, is your plan or goal to see this all the way through and then pick up unordered separately or are you hoping to try to resolve the unordered buffer as part of Stage 3? Before Stage 3?
+
+KG: My plan is not to include the unordered helpers; to include only the simple things and leave unordered to a follow-up. I am not going to promise to do that follow-up. There’s a lot of interesting space there, and if I have the time and resources, I would definitely like to. But I don’t want to give a false impression that it will be an immediate subsequent thing. My plan is to include only the minimum viable things in the proposal and then hope that either I or someone else has time to explore the space of unordered helpers later. But not in this proposal.
+
+USA: Next we have MF
+
+MF: (support for splitting off unordered async in the same way we split off async) I wanted to go on record that I am very supportive of splitting off the unordered problem space from this proposal. As Kevin said, I am a big proponent of exploring that space. In contrast to how we typically split off proposals, where we split off less justified or less important parts so we can make sure we get the important parts through without holding them up, in this case we split these off because it was so important to make sure we got them right. And I think we can do similarly here with unordered helpers: not try to push them through with everything else, but split them off to make sure we spend the appropriate time and resources on this problem space.
+
+KG: Agreed. And I do want to emphasize, there's a lot of possible design space here.
+
+USA: You’re also in the queue on this.
+
+MF: (buffered eager start parameter is important) You mentioned the buffered helper could take an eager parameter. I don’t think we should wait to follow up with something like that. That’s so fundamental and necessary that we should include it in this MVP. I would really like to see that parameter there.
+
+KG: Sounds good to me.
+
+USA: Next we have RGN
+
+RGN: (this is looking great; thanks for the focus on concurrency) Yeah. This is a great update. I really appreciate the attention to detail and the focus on concurrency. And I love the direction that it’s headed in. So thank you.
+
+KG: Thanks very much.
+
+USA: Next LCA.
+
+LCA: (I'm very happy to see this is moving forward, even without unordered concurrency) I want to say the same thing. I am excited this is moving forward. I think this will be a great addition.
+
+USA: Thank you for your comments, everyone. That’s it for the queue, KG.
+
+KG: Okay. Great. It sounds like the committee is happy with this direction and this minimum set of things for this proposal. So I hope to put something together and bring it back to committee as soon as I can. I am expecting to be busy for the next couple of months, so I am not committing to get that ready by the next meeting, but hopefully as soon as possible. I’m sorry this has taken so long. But I think we’re now at a place where we have at least a reasonably coherent set of things to bring forward.
+
+KG: Before I finish, I did want to touch on the other two open questions that were raised. My intention for `.drop` is to have it be sequential rather than concurrent: it pulls one thing and awaits the result; no concurrency in drop. I guess I will add a parameter that says you want to drop eagerly. I think that’s sufficiently useful. Not eagerly in the same sense, but eagerly; you know what I mean: dropped concurrently. An opt-in Boolean parameter for dropping concurrently. I will also have the eager-start parameter for buffered that Michael expressed support for. And vended promises will count towards the buffer; that makes it more complicated, but not that bad. No one objected to any of those, so… Yeah. To summarize briefly – no. I see we have something else in the queue.
+
+USA: DE in the queue next.
+
+DE: (Retrospective: How bad a job did we do on async generators awaiting in yield?) So looking back at async generators, we made the decision a while ago that when you yield a value it will await it, kind of reducing the amount of parallelism, where you could otherwise yield a promise and have it only be awaited when you await the next result. I am kind of curious how bad of a decision that was. I guess you can still recover that parallelism by having a sync generator that yields promises, calling `toAsync` on it, and using one of these methods. This isn’t actionable; I am just curious what your thoughts are now that you have thought about AsyncIterators.
+
+KG: Yeah. That’s an excellent question. And I think the answer, somewhat surprisingly, is that it’s not actually the wrong design for async generators. For the specific case of map, and to a lesser extent flatMap, not awaiting the result before proceeding would get you some efficiency. But that isn’t the case for filter, because for filter, the decision about whether or not to yield the value depends on the value. So for filter, it’s not actually possible to write something that looks like an async generator in any way and has concurrent filtering behavior, because the thing that actually needs to be concurrent is the decision about whether to yield, which has to be syntax outside of the yield. So I think this actually just comes down to syntax not really being suitable to give you full concurrency, or at least not the syntax we have in the language. Other languages have different syntax that would work, with structured concurrency and so on. But even if we had decided that you don’t await yielded values, that doesn’t get you as much concurrency as this proposal gives you, with filter being able to be concurrent. The syntax is inherently limited; it’s not the decision about await that causes that limitation.
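+
+The existing behavior DE and KG are discussing is easy to check. This snippet uses only shipped language semantics, nothing from the proposal: in an async generator, `yield somePromise` implicitly awaits, so the consumer receives the plain value and the generator body does not proceed until that promise settles.
+
+```javascript
+// Existing-semantics demo: the async generator machinery awaits yielded
+// promises, so `value` on the consumer side is never a Promise.
+async function* gen() {
+  yield Promise.resolve(1); // awaited by the generator machinery
+  yield 2;
+}
+
+async function demo() {
+  const it = gen();
+  const first = await it.next();
+  const second = await it.next();
+  const third = await it.next();
+  // first.value is the number 1, not a Promise
+  return [first.value, typeof first.value, second.value, third.done];
+}
+```
+
+This is also why async generators queue rather than run concurrently: each step of the body waits for the yielded promise before continuing.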
+
+DE: Interesting. Thank you.
+
+USA: Thank you for all the discussion. Thanks, Kevin. Would you like to summarize the key points?
+
+KG: I will do that off-line.
+
+USA: Okay. And would you like to dictate a conclusion? I don’t know if it’s –
+
+KG: Yeah. I wasn’t asking for advancement for anything. So I will do that off-line as well.
+
+USA: All right. Thank you.
+
+### Speaker's Summary of Key Points
+
+The current plan for the proposal is that only `.map`, `.filter`, `.flatMap`, and `.toAsync` will have any affordances for concurrency, with `.buffered(N)` as a helper for making use of that affordance. These helpers will all be order-preserving, despite that giving up much possible concurrency; the space of unordered helpers is vast and should be explored by the committee as a follow-on. No other things will be included in the initial proposal except for those which are also in the sync helpers.
+
+On the specific questions raised:
+
+- the committee was in favor of `buffered` taking an opt-in parameter to start filling the buffer eagerly rather than waiting for the consumer to start pulling from the buffer
+- promises which have been vended by `buffered` but not yet settled will count towards the buffer
+- no expressed opinions on `.drop`; the champion's preference is for it to be sequential, with maybe an option to opt in to the concurrent+exception-discarding behavior
+
+### Conclusion
+
+- Proposal was not seeking advancement, but the committee is in favor of this direction.
+
+## `Intl.MessageFormat` Stage 1 open question involving error handling design patterns
+
+Presenter: Shane F. Carr (SFC)
+
+- [proposal](https://github.com/tc39/proposal-intl-messageformat/)
+- [slides](https://docs.google.com/presentation/d/1kyQqhoc4utHer6o0Gomf7a9rgLwEFHobMLOL6FBlBs0/)
+
+SFC: Excellent. Cool. Thank you all for taking this topic. So I am Shane. Many of you know me here. I am not really a champion for this proposal, but I am kind of, I guess, by default sort of a champion now, because I am giving this presentation about `Intl.MessageFormat`. This is an issue regarding this proposal. So let’s go ahead and dive in.
+
+SFC: So in the Unicode MessageFormat specification, there is a big focus and emphasis on error handling. There are multiple types of errors. And this is for Unicode MessageFormat, not `Intl.MessageFormat`; this is the specification we are focusing on building an Intl API around. The Unicode specification has many error types. Message errors, like a SyntaxError, are basically handled when parsing and processing the message, or the data model for the message. The other type of error is resolution errors, and they cannot be detected until you are formatting the message, after you pass the placeholders into the format function of the MessageFormat object. That’s when the second class of errors can be detected.
+
+SFC: So the Unicode MessageFormat specification tells us that in all cases, when encountering a runtime error, a message must provide some representation of the message, and an informative error or errors must also be separately provided. So figuring out how to encode this into `Intl.MessageFormat` is very important for us.
+
+SFC: Message errors are handled in the constructor; it's already the case that many Intl objects throw exceptions in the constructor, for example a RangeError if an argument doesn't make sense. That's already established.
+
+SFC: The challenge here is what to do with the second class of errors, the ones raised during the format function. This is a case we haven't seen too much in ECMA-402, though there are some cases where we see it. There are Intl objects that can throw from a format function: we don't add such behavior anymore, but we have done it in the past, like throwing in the formatRange function, and `Intl.DisplayNames` throws if you pass an invalid language code into its `of` function. So there are some cases where we do error handling in the format function, but not very many, and MessageFormat is definitely the biggest case where we have to think about this problem.
+
+SFC: So there are three directions I want to lay out here. This is the topic we discussed at the TG2 call last month, and the conclusion of the TG2 call was that we went over the options but wanted to bring them to plenary, because it's not an Intl-specific decision; this is a JS standard library design question, more like an API design discussion. These are the three options I have laid out here on the slides.
+
+SFC: So Option 1 is what is currently in the proposal, which is an error handler callback. The way this works is that the format function, in addition to taking the values parameter, also takes an onError callback. The error is passed into the onError callback when it occurs, and the callback does whatever it wants with the error. If the onError function is not specified, the error will be ignored and a best-effort replacement value is used. The specification says that the errors may surface as a warning shown to the user; I guess that could show up in `console.log`, maybe. But that's the behavior here. The current behavior, again, is that these will still return the string.
+
+SFC: Option 2 is an expressive error object. This would mean that the format function throws an exception, and the caller catches the exception and does something with it. A MessageFormat error object would contain a field for accessing the best-effort replacement string. Option 3 is a more expressive return value. In a lot of the work that we have done on other platforms, we always want to return more information than we can express in a string, so in those other areas we return a formatted value which has a toString function and also additional annotations. Similarly in Intl: we don't do this today, but this is a good opportunity to start in this direction, if we think it is beneficial for us. Format would return an upgraded type, a formatted message, which can be converted to a string and has other fields for inspecting errors. Yeah.
+
+SFC: So here are some code samples; it was suggested earlier that I add them to the presentation. The error handler callback is the first one, Option 1. In this case, `console.warn` is the function passed in, but you can pass whatever function you want, a closure or whatever, and it is invoked every time there is an error, and you can do whatever you want with the error. Option 2 is the expressive error object. I showed what you have to do if you wanted to write a little snippet of code to always capture the fallback message. I realize that in the previous slide I said this field is called message, and here it is called fallback message; bikeshedding aside, this is what the call site looks like. Option 3 is the expressive return value. In this case, the resultObject that gets returned might have fields like an errors array, and a valueOf that converts it to a string. Those are the three directions we came up with. It's possible we are missing things; these are the three brought up in the discussions previously. Now for the pros and cons. For Option 1, the fallback string is always returned and available by default; the main drawback that delegates have raised is that handling errors requires indirection, which is a bit strange. Option 2 is a clear, common design pattern: if there's an error, you can always catch it, so it's not implicitly ignored.
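
For illustration, the three call-site shapes under discussion could be mocked as follows. All names here (`MF1`, `MF2`, `MF3`, the `onError` parameter, the `fallbackMessage` field, and the `errors` array) are hypothetical stand-ins for this sketch, not the proposal's actual API.

```javascript
// Option 1: error handler callback; a fallback string is always returned.
class MF1 {
  format(values, onError = () => {}) {
    if (!("name" in values)) {
      onError(new Error("unresolved variable: name"));
      return "Hello, {name}!"; // best-effort fallback
    }
    return `Hello, ${values.name}!`;
  }
}

// Option 2: throw an expressive error carrying the fallback message.
class MF2 {
  format(values) {
    if (!("name" in values)) {
      const err = new Error("unresolved variable: name");
      err.fallbackMessage = "Hello, {name}!"; // invented field name
      throw err;
    }
    return `Hello, ${values.name}!`;
  }
}

// Option 3: return a result object with an `errors` array and a
// string conversion.
class MF3 {
  format(values) {
    const errors = [];
    let text;
    if (!("name" in values)) {
      errors.push(new Error("unresolved variable: name"));
      text = "Hello, {name}!";
    } else {
      text = `Hello, ${values.name}!`;
    }
    return { errors, toString: () => text };
  }
}
```

With Option 1 the error only surfaces if a callback is passed; with Option 2 the caller must try/catch and read the fallback off the error; with Option 3 the caller converts the result to a string and may inspect `errors`.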
+
+SFC: A downside is that it has a risk of data-driven exceptions: you test thinking that no errors can happen, and then when it's deployed, there are exceptions, even though the specification defines well-defined fallback behavior. It depends on what you do in these cases, but there is reasonable fallback behavior, so the downside is that if you didn't add the try/catch, something can go wrong where you could instead have been using the fallback message.
+
+SFC: And Option 2 probably returns only a single error at a time, whereas the other two can support returning lists of errors. Option 3 is the expressive return value. An advantage is that, similar to Option 1, the fallback string is always available, and relative to Option 1 there is less indirection: your errors are collected for you and available to you. I think of it like a RegExp match: the match result returns an array to you and you do what you want with it, instead of capturing things in a callback. The downside is that it's a new thing; the return value is not actually a string. And with Options 1 and 3, errors are easily ignored. That's not really a pro or a con; I am not sure which, since some people feel either way about it, so I just listed it as a note.
+
+SFC: That's my last slide. Back to the code samples. If we can go to the queue, I hope people can weigh in there. We have quite a few entries, which is nice.
+
+DLM: Sure. I guess my strongest preference is against throwing. It's easy for someone to forget a try/catch, and it's very common for localized strings to be missing, in which case what you want is the fallback behavior anyway. Beyond that, I have a fairly weak preference for Option 1. It just feels like the most straightforward: in most cases you want the fallback string and that's what it does for you, and if you want to do something more sophisticated, you can pass in the onError callback. It feels like that design handles the common case quite well.
+
+JWS: From what I have seen in user-space Intl libraries, Option 1 seems to be the most common. They tend to show the fallback message with the related values if they can't look up the locale data, or, like you said, maybe there's a bad value. They generally prefer to gracefully degrade and show the fallback rather than throw; if people need to do a try/catch, that starts to become cumbersome. I also like Option 1 because the format method stays consistent with the other format methods, like DurationFormat and DateTimeFormat, which return a string. It would be unfortunate to change that by returning an object or upgraded string, like Option 3.
+
+SFC: Yeah, I will jump back in a little bit here. Another variant of Option 3 would be to keep this behavior but rename the function: if people expect `format` to return a string but this one doesn't, we could call it `formatMessage` or something. If the only concern is the name confusion, we can figure out different names. But yeah.
+
+RGN: Looks like I am in the minority here, but I don’t like Option 1, establishing an onError callback from the perspective of the language itself is just unnecessarily confusing with respect to control flow in cases where for example the callback itself throws an error or spawns an async call stack. I would rather not direct authors into that kind of thinking.
+
+RGN: New topic: A missing option here is one that is being considered in MessageFormat— sidestepping the issue by having separate functions, one of which throws errors and the other returns fallback strings, for authors to use as appropriate for themselves.
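
For illustration, the split-function idea might look something like the following sketch. The method name `formatOrThrow` and the `resolve` helper are invented for this example; they are not from the MessageFormat proposal.

```javascript
// Hypothetical sketch of separate throwing and fallback-returning
// functions: one method throws on resolution errors, its sibling
// always returns the best-effort text.
function makeFormatter(resolve) {
  return {
    // Throws when the message could not be fully resolved.
    formatOrThrow(values) {
      const { text, error } = resolve(values);
      if (error) throw error;
      return text;
    },
    // Never throws; always returns the best-effort fallback text.
    format(values) {
      return resolve(values).text;
    },
  };
}

// Example resolver: "name" is a required placeholder.
const greeter = makeFormatter((values) =>
  "name" in values
    ? { text: `Hi, ${values.name}!`, error: null }
    : { text: "Hi, {name}!", error: new Error("unresolved variable: name") }
);
```

Authors would pick the variant appropriate to their situation: the throwing one when a missing value is a bug, the fallback one when graceful degradation is desired.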
+
+USA: Yeah, thank you. I would like to slightly disagree with what you just said, RGN, because I think that's not really going to help all that much. If you think about it, if you always have to call the error-checking variant before knowing for sure whether you would get something nice from the formatting variant, then it kind of comes back to Option 2: instead of a try/catch, you have an if statement. Or it's possibly a worse version of Option 3, where you have to destructure the result and check for the error. My personal preference is 1 or 3, greatly more than 2. I think the code sample that SFC showed explains exactly what's wrong with Option 2: it's unergonomic for most cases to wrap all usage of a function in try/catches. On Option 1 versus Option 3, I personally find the first the more JavaScripty solution; JavaScript programmers are more comfortable using handler callbacks. Option 3 seems more Rust-like, in a way, because it's kind of like a result type. I don't think that's a bad thing; I just think it's less common in JavaScript codebases.
+
+TKP: Yeah, I prefer Option 3, because it's kind of like the functional style: you have one side that is the success value and the other side that is the error. But apart from libraries that are functionally inspired, I have not seen this much in the wild. Regarding the second option: formatting is often done in response to user input that is handled asynchronously, and if you throw inside a promise, you have to catch it there, and the whole try/catch approach gets awkward. So I would prefer Option 3 over 2, for that matter.
+
+EAO: As the author of the current proposal language, I prefer Option 1; it's the least bad solution. One part of it is that it provides a default behavior of issuing a warning, for example through the console or whatever might be appropriate to the environment in which it's being run. One aspect I'd like to highlight is that `Intl.MessageFormat`, like the other Intl formatters, doesn't only have a format method that needs this behavior; we also have a formatToParts method, which returns an array of parts. With Option 1, formatToParts follows the same pattern as the example here, where it's `message.formatToParts` with the values and the onError after that. With Option 2, we would probably need a different error to throw for format versus formatToParts, because the formatted message has a different shape or value: the fallback message in that error needs to be a string or an array of parts, depending on which method was called. With Option 3, we would presumably end up with something like a RegExp match result: an array of parts as the value, with an errors property on that array. And with Option 4, we would need four functions instead of the current two: format and formatToParts, plus a throwing variant of each, which makes it very cumbersome to use in practice. My preference here is Option 1.
+
+PST: Okay, a naive question: did you consider, instead of all this, a constructor option? So that when you create a formatter, you say how errors are handled. Does that make sense?
+
+SFC: The error handling could be configured at construction: there could be an option in the constructor saying whether you want it to throw an exception or use a fallback value. I don't see how that resolves the choice between the other options, but it could be a way to, say, turn off the throwing of Option 2 with a constructor option.
+
+PST: The code that is creating the formatter is often not the code that is using the formatter, so that could make sense, possibly?
+
+SFC: I don't think that has been brought up before; that is an interesting one and I will think about it. So there are two new options from this meeting, which is good; I am glad we are having this discussion. How are we doing on time?
+
+RPR: You have about 7 minutes remaining before the summary.
+
+EAO: What you mentioned does make potential sense, but it has the cost of distancing the error-handling code from where the error actually happens. For example, if the same message is formatted multiple times with different placeholder inputs from different places, and one of those throws an error, it becomes difficult to figure out where the error came from, either in the code that is handling it or otherwise. I did consider this when originally writing the proposal, and ended up having onError definable in the format method call itself, because that seemed like the more appropriate scope and location for the error to be handled correctly.
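
For illustration, the constructor-option idea (Option 5) might look like this minimal sketch. The `errors` and `onError` option names, and the class itself, are invented for this example, not from the proposal.

```javascript
// Hypothetical sketch: error behavior is chosen once, where the
// formatter is created, rather than at each format() call site.
class ConfiguredFormatter {
  constructor({ errors = "fallback", onError = () => {} } = {}) {
    this.mode = errors;
    this.onError = onError;
  }
  format(values) {
    if (!("name" in values)) {
      const err = new Error("unresolved variable: name");
      if (this.mode === "throw") throw err; // opt-in throwing behavior
      this.onError(err); // centralized handler chosen at construction
      return "Hello, {name}!"; // best-effort fallback
    }
    return `Hello, ${values.name}!`;
  }
}
```

This matches JGT's point that the (often more experienced) library author configures error handling once, while EAO's objection is that the handler is now far from the call site where the error actually occurs.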
+
+ACE: I like that with Option 1 the runtime can see when no callback was provided and can emit a console warning on error, unlike Option 3, where the runtime does not know whether you will ever check the errors. That gets into something like unhandled promise rejections, where you would only warn if the errors have not been read before the result is garbage collected, which is far more complex than warning based on a simple check. That said, part of me does not love the callback for potential performance reasons, but all things considered I like Option 1.
+
+JGT: I want to put in a vote for Option 5, for a few reasons. One is that, speaking as someone who is much more familiar with other parts of the Intl API than with message formatting, it is safe to say most of us work at highly internationalized, often larger companies where globalization is a priority. Most developers in the world don't do that; they are at smaller companies with smaller budgets, and our goal should be to set the bar as low as possible for those developers to enter the world of internationalizing messages. I actually think we achieve that goal by making this more consistent with the way other parts of the Intl API work. In the mechanics of the other APIs, everything happens in the constructor, and the format method is given only the data that you want to format and nothing else. Whether or not that is the correct pattern long-term, it is the one established in the Intl API, and if we want to make this accessible to as many developers as possible, we should make this a constructor option because that would be more familiar. The other thing, to respond to what was said before: it may actually be an advantage to put error handling in a different place, because oftentimes the developer building the library used inside a company for localization is a different developer than the person actually calling it to do the formatting, and it is likely that library author is a lot more conscientious. So this might be a positive of putting it in the constructor, where you might have a more experienced developer thinking it through. So that is it.
+
+USA: Yeah, to respond to that: there are a few differences between MessageFormat and the other formatters, and this can be one of them. For most of the existing formatters, at initialization you give some options to the implementation, and in the format call you apply it to the value; the only thing that could go wrong at that point is that the value you provided was invalid. However, for MessageFormat, initialization includes parsing the message, and messages are not drawn from a small set of values, so it could fail at that step; and the formatting step interpolates all of the values. So the problem is that different errors can happen at each of these stages, and while I appreciate centralizing the error handling, it would actually be less ergonomic, because it would confuse developers about which step the error is happening in to begin with.
+
+SFC: How are we doing on time?
+
+EAO: A quick note on that: MessageFormat allows the message structure to contain placeholders that format numbers, date-times, and other things, which end up calling the other formatters. Furthermore, MessageFormat allows values, that is variables, that are only set in the format call. This means that when we are calling `messageformat.format` or formatToParts, we may need to construct another internal formatter at that point, because the options for that internal formatter are not known until then. So construction errors don't only occur with MessageFormat during construction, but sometimes during the format call.
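
This point can be illustrated with a real inner formatter: a placeholder's options (here, the currency code) may only arrive with the `format()` call, so the inner `Intl.NumberFormat` is constructed, and can throw, during format rather than during the outer object's construction. `LazyNumberMessage` is an invented stand-in for this sketch, not the proposal's API.

```javascript
// Sketch: an inner formatter whose construction is deferred to
// format time because its options come in with the values.
class LazyNumberMessage {
  constructor(locale) {
    this.locale = locale; // no inner Intl.NumberFormat exists yet
  }
  format({ value, currency }) {
    // An invalid `currency` makes this constructor throw a RangeError,
    // i.e. a "construction" error surfacing at format time.
    const nf = new Intl.NumberFormat(this.locale, {
      style: "currency",
      currency,
    });
    return `Total: ${nf.format(value)}`;
  }
}
```

So even an error-handling design centered on the constructor cannot catch every construction error up front.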
+
+SFC: I am on the queue with this entry, and I know we are out of time, but I want to raise to the delegates that we have spent a lot of time on reentrancy: spec code that can call user code has a lot of challenges. We focused on this in Temporal, with the conversion to primitives and the discussions about how to avoid reentrancy, so it surprises me that a lot of delegates are in favor of the reentrant option. I think it is about time to wrap this up; we can continue the discussion, and I will write a conclusion for the notes. My conclusion is that we are mostly in agreement that Option 2 is not the direction we want to go; there are additional options, 4 and 5, that were raised, and I have heard support for 1 and 3. It sounds to me like there is a lot of intricacy here that we have not fully considered, and it does not sound like anyone in plenary strongly objects to any of these options besides Option 2. I do hear RGN's point about Option 1 and reentrancy, since he is on the queue, but that is something we can bring back. We will take this direction back to the champion group before any proposal for stage advancement. Please participate on GitHub if you want to be engaged; I appreciate all the engagement, and I appreciate everyone. We are out of time, so thank you so much.
+
+RPR: Thank you, Shane; we are on time there. The next item was going to be import, and we are waiting for Chris to wake up to share that one, so instead we will do the proposal review/scrub for 30 minutes. Dan, are you going to run that one? Excellent, so we will move to that as the next item. It says 60 minutes, but we will do 30; we have until 5 to the hour before a 20-minute break, so we can get 35 minutes, which is about the same thing.
+
+### Summary
+
+The Unicode MessageFormat specification emphasizes error handling with two types of errors: message errors (handled during message processing) and resolution errors (detected during message formatting). The challenge lies in determining how to handle resolution errors in the `Intl.MessageFormat` API. Three options are proposed: 1) an error handler callback function passed into the format function, 2) an expressive error object thrown by the format function and caught by the developer, and 3) an expressive return value with additional fields for inspecting errors. Each option has pros and cons regarding error handling, fallback message availability, and data-driven exceptions. We seek feedback from the committee on which path to take.
+
+### Conclusion
+
+- Potential concern about reentrancy in option 1
+- Agreement that option 2 is undesirable to most delegates
+- No strong objection to option 3, but majority of delegates prefer something else
+- New options 4, 5, and 6 were added which will be considered by the champion group
+
+## Proposal Scrub
+
+Presenter: Daniel Ehrenberg (DE)
+
+- (no slides)
+
+### Legacy RegExp Features
+
+- [proposal](https://github.com/tc39/proposal-regexp-legacy-features)
+
+DE: Legacy Regex features in JS.
+
+DLM: SpiderMonkey had a plan to work on this.
+
+MLS: Different implementors had different plans from one another.
+
+MLS: I don't think we will add features we don't already have, and I don't think we will unify; that is probably the best way to say it.
+
+DE: Not unifying when possible seems quite unfortunate; there are many things like this in the web platform, where there is a lot of legacy.
+
+MLS: So if one implementation has, say, lastMatch, and another implementation does not, that one would need to add it.
+
+DE: Not necessarily, because in the past a lot of things were resolved by some sort of pragmatic combination of filling in some things and taking away others. Overall, this can only progress if someone wants to take on the work; otherwise we should withdraw the proposal and more clearly document that it is just not being worked on, I think.
+
+PFC: In test262 the implementation status is that JavaScriptCore, GraalJS, and LibJS are 100% compliant, and Hermes passes 19% of the tests.
+
+DE: Okay what about V8 and SpiderMonkey?
+
+PFC: It is not showing them here.
+
+DE: They fail all the tests?
+
+PFC: I will look at the results.
+
+NRO: It is just that the website is not showing them; they are rebuilding their data, so I don't know.
+
+DE: Could somebody take on the action item to follow up on this proposal and learn more about the implementation status?
+
+RGN: MM is not in the call right now, but I can get him.
+
+DE: I think in the past Mark was hopeful for this because it made some regularity properties of these objects hold, but he did not get to personally work on it. I want to suggest a deadline for proposals like this: after two years of soliciting champions, we move it to the inactive proposals list. I would prefer having a solicited champion to this being left not-completely-abandoned over multiple years. What do you think about handling cases like this?
+
+MLS: So again, I think MM was the one that brought this up, and he just wanted to unify what exists in the wild. But as Dan said, it is going to be hard to get everybody to take all of these things.
+
+DE: Sounds like maybe you (JSC) already do. So there is a lot of value in unifying in and of itself, even if it is around an imperfect thing; it increases compatibility among engines.
+
+CM: I think a little bit of historical background on this was that MM just in general likes making things be consistent everywhere, but the real driver of this was Claude Pache who is not a TC39 delegate, and MM was sponsoring it so Claude could work with the committee on it. We have not heard from Claude in ages and I think the thing to do in this case is have MM get back to him and find out what is going on.
+
+DE: For next steps, I want to propose that if this proposal continues to have no volunteers in a year (a year ago is when we last discussed it), then we move it to the withdrawn/inactive proposal category. Can we loosely agree to that as the next step, with the actual withdrawal being a step that would require consensus?
+
+DLM: Seems like a reasonable plan to me.
+
+MF: Yeah, okay. I would like to know: what is the role of the champion at this stage in the proposal process?
+
+DE: I would suggest that the champion is the primary person responsible for moving this forward within the next year, and hopefully they would bring it back to committee rather than it coming up again through the scrub process.
+
+MF: So by bring to committee, you mean like pester engines during plenary?
+
+DE: If they cannot convince engines that it is a good idea, they should withdraw it. They can contribute patches to engines, they can decide to withdraw it, or they can make changes to make it more attractive to engines. There are many different things that champions can do.
+
+MF: I really don't think that's the onus of the champion here. I think if we as a community are going to withdraw it, it's going to be because an engine says, not only is it not prioritized, but they are no longer interested in implementing it indefinitely. And then we can think about withdrawing it. Until then, there are going to be things that are higher priority, things that are lower priority, it doesn't mean they're not Stage 3.
+
+DE: This is distinguished from all the other recent Stage 3 proposals, because all the others attract implementation interest. We should not put things at Stage 2.7 or 3 if they are not a good idea; this has been at Stage 3 for a long time, so let's make its stage reflect the current reality. Anyway, let me see if I have the sense of the room, and then you can confirm: if somebody makes a PR to SpiderMonkey that implements this, would you be welcoming of that?
+
+DLM: Yes, definitely.
+
+DE: So here we will start the clock counting down for a year. People are very encouraged to make PRs against engines to implement this, to rescope the proposal so that it attracts more interest, or even to just make the case for why this is valuable, maybe beyond the reasons that Mark has made in the past. That is the conclusion I want to record for this.
+
+SFC: I think about the signal we are sending to implementers if this thing is on a ticking time bomb. I think the best way to get engines to implement is to reaffirm that this is a proposal we as a committee support and report that it remains at Stage 3. A lot of these proposals are just a matter of prioritization: can we get head count to implement this, and so forth. But it is a Stage 3 proposal, and I tend to agree with Michael on this one; I don't think this particular proposal was erroneously advanced to Stage 3.
+
+DE: We tried to do that a year ago; we said this is Stage 3 and we would like to see follow-up on it. One thing that people can do here is explain to everyone why they think this proposal is a good idea, and maybe that is persuasive. Shane, do you want to make that case? Or Michael? I have explained why I think it is valuable: converging on common semantics will increase compatibility. Maybe we can go on to the next topic, but I would like to record a conclusion about what the next steps are. Michael or Shane, do you have alternative next steps to propose? I think the committee as a whole owns this to some extent.
+
+MF: I wanted to not put a time limit on finding a Champion. I think that it is valid, everyone still wants it, nobody's made an argument against it, it's just that it is not as high priority and we keep doing additional work that jumps in front of it, that's okay. I don't want a conclusion that is we're going to withdraw it based on lack of implementation, especially since we do have an implementation. We're obviously as a whole working toward it. And we've just heard today from SpiderMonkey that they would be open to also accepting an implementation.
+
+SFC: It seems to me that if implementers think this proposal is not worth implementing, then they will come to us, like they did with Temporal, saying the proposal is too big and we are having challenges figuring out whether we can implement it successfully. If the problem is that implementers think there is an essential issue with implementing it, they should send that feedback.
+
+DE: We just heard from SpiderMonkey. I am fine with recording that many people on the committee find this proposal valuable, but multiple engines do not find it motivating enough to implement themselves. Is that an accurate conclusion that we can agree on?
+
+SFC: Maybe I misunderstood DLM's comments. I thought this is something that is still on the queue to implement; it is not a priority, but it is not especially deprioritized either, right? I feel this is probably the case for V8.
+
+DE: DLM, I think it is accurate that you are not prioritizing it, and that is what I want to record in the conclusion: you are deprioritizing it because you see a lack of value in it, even though it is a small task.
+
+DLM: I have not looked at it since we last discussed it, and when I reviewed it then, it did not seem like a high priority for implementation. It is not that we would never implement it; as you said, we would very much welcome someone else contributing an implementation, but I cannot see, with current resourcing, why we would spend time on it. In this case, JSC has implemented it, and if V8 implemented it, we would do it too, but we are not motivated to implement it ourselves.
+
+SFC: The Intl proposals are all implemented in SpiderMonkey by a contributor who does not work for Firefox or Mozilla. It is great that we have people making these contributions, and maybe we just don't have someone doing that for this proposal; maybe finding one is a good next step. So as a committee we are willing to come back to this, and if it continues to not get multi-implementer interest we can revisit, but we don't need to make that decision today. Is that a conclusion we can agree on?
+
+??: Yes.
+
+DE: I do not want to spend more time on this, but we should not leave proposals abandoned at Stage 3, because Stage 3 is a commitment that we are going to do something about a proposal and try to see it through.
+
+#### Summary
+
+- DE argued that this work is important to have cross-engine alignment on all edge cases
+- Agoric is in favor of this proposal to support SES, but they can get by with the world in its current state
+- No browser is prioritizing this work highly
+- SpiderMonkey would accept outside patches to bring it up to compliance. V8 might as well.
+- JSC raised concerns that they might not be interested in implementing the features that they don’t already have (though they seem to pass the tests already)
+- No timeline for considering demotion/withdrawal; the proposal remains at Stage 3.
+
+### JSON Modules
+
+- [proposal](https://github.com/tc39/proposal-json-modules)
+
+DE: JSON modules: are these already implemented in multiple engines? Maybe they are just ready to go to Stage 4?
+
+NRO: This is implemented in V8 and JSC. We plan to propose this for Stage 4, together with import attributes, in the next few meetings, as soon as we have an answer on whether we keep the `assert` keyword or not.
+
+NRO: Chrome has shipped removing `assert`.
+
+DE: That seems quite optimistic already and the proposals are lumped together.
+
+LCA: We have contributors on both of these, so this is not dead; we are paying attention to it.
+
+DE: Okay, awesome; thank you for your help. These are gradually shipping as well. As for explicit resource management, we are hearing about it later this meeting.
+
+#### Summary
+
+- JSON modules is smoothly moving along towards Stage 4, and will be proposed for advancement soon, once `assert` is unshipped.
+
+### Float16Array
+
+DE: Float16Array? Is that being implemented and shipping in browsers?
+
+DLM: We are attempting to ship it in the next release of SpiderMonkey, and we are aware that V8 has started their implementation as well.
+
+DE: Great that sounds quite healthy.
+
+#### Summary
+
+- Under development/shipping in multiple browsers
+
+### Decorators and Decorator Metadata
+
+DE: The status with decorators overall, including decorator metadata, is that we have tests just out for review in test262. https://github.com/tc39/test262/pull/4103 I encourage people to help contribute to this testing. I am not aware of any decorator implementations in engines or others. PFC do you want to clarify on decorator testing?
+
+MLS: Decorators are Stage 2, right?
+
+DE: Decorators are Stage 3. So now that they are tested, I hope they are becoming sufficiently interesting for engines to implement.
+
+MLS: Do decorators need to be changed to stage 2.7?
+
+DE: If we did not have tests by this meeting, we could have considered retracting it to 2.7. That’s not necessary, since we have tests out for review that look pretty comprehensive, more than 700 files.
+
+NRO: To clarify, there are already some tests for decorators. When we created Stage 2.7, we did not notice the decorators should have been considered 2.7. I was planning on proposing decorators for 2.7 but I talked to one of the champions (Kaitlin Hewell Garrett) and she had a test262 pull request ready. As far as I know implementations were waiting on that. The tests should land soon, so we don’t have to reconsider the proposal’s stage.
+
+DE: Do people have concerns about the state of decorators? It is a bigger proposal.
+
+#### Summary
+
+- Decorators are at Stage 3 and now have test262 tests (under review)
+- No concerns expressed about the decorators proposal
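
For context on what implementers would be testing: a Stage 3 decorator is just a function `(value, context) => replacement`. Since the syntax is not shipped in engines yet, this sketch applies a made-up `logged` decorator by hand; with syntax support it would be written as `@logged greet(...) {}`:

```javascript
// A method decorator: receives the method and a context object, returns a
// replacement function.
function logged(method, context) {
  return function (...args) {
    console.log(`calling ${String(context.name)}`);
    return method.apply(this, args);
  };
}
class Greeter {
  greet(name) { return `hello, ${name}`; }
}
// Manual application, mimicking what the decorator syntax would do:
Greeter.prototype.greet = logged(Greeter.prototype.greet, {
  kind: "method",
  name: "greet",
});
console.log(new Greeter().greet("TC39")); // logs, then "hello, TC39"
```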
+
+### `JSON.parse` source text access
+
+DE: JSON parse source text access is gradually shipping in browsers, right?
+
+DLM: We do have an implementation in SpiderMonkey; it is still Nightly-only, and I will check on that. I’m aware that V8 has an implementation as well.
+
+RGN: That is my understanding and there are tests right now but I would like to flesh them out before proposing advancement to stage 4, probably at the next meeting... but regardless, it is still very much active and in implementation.
+
+#### Summary
+
+- Multiple implementations in development, shipping now or soon
+- More test262 tests to be written
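
A note on the motivation, since it does not appear above: plain `JSON.parse` coerces every number to a double, which is what source text access fixes. The reviver signature in the comment follows the proposal; the executable part runs everywhere today:

```javascript
// Today, digits beyond double precision are silently lost:
console.log(JSON.parse("9007199254740993")); // 9007199254740992
// With the proposal, the reviver's third argument exposes the raw source text,
// e.g. (sketch, only in engines that ship it):
//   JSON.parse("9007199254740993", (key, value, { source }) => BigInt(source));
// keeps every digit, because `source` is the string "9007199254740993".
```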
+
+### Regular Expression Pattern Modifiers
+
+DE: Regex modifiers. I think this is going along well. I don’t know if anybody has more to add for status. This proposal allows inline changes to certain regex flags.
+
+MLS: That went to stage 3 two years ago.
+
+DE: Right, so that is another one where we might have considered retracting it but at this point there is no reason because the tests landed.
+
+MLS: Yeah and we have not started this one.
+
+DE: Right, is there any implementor who is looking at this?
+
+DLM: V8 has an implementation in the regex library that we also use. I have been looking at the patches this week and I will turn it on.
+
+DE: Okay great.
+
+#### Summary
+
+- Test262 tests landed
+- Implemented in V8, which should flow into SpiderMonkey soon. No JSC plans yet.
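
To illustrate the "inline changes to certain regex flags" mentioned above (a sketch: the modifier pattern is built from a string so this still parses on engines without the feature, falling back to a hand-written equivalent):

```javascript
// The proposal allows toggling flags for part of a pattern: /^(?i:user):[0-9]+$/
// matches "user" case-insensitively while the rest stays case-sensitive.
let re;
try {
  re = new RegExp("^(?i:user):[0-9]+$");
} catch {
  // Engine without pattern modifiers: spell out the cases by hand.
  re = /^[Uu][Ss][Ee][Rr]:[0-9]+$/;
}
console.log(re.test("USER:42")); // true
console.log(re.test("user:42")); // true
```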
+
+### Sync Iterator Helpers
+
+DE: Okay, so sync iterator helpers: these have been implemented in engines, right?
+
+DLM: These have been implemented in SpiderMonkey as well, but shipping them is not on my list for the next month or two. I don’t know about the other implementations’ status.
+
+DE: One thing about V8: is anybody from V8 here?
+>> Was there a web compatibility problem with this that was resolved?
+
+ACE: I think they have started to ship this in V8. We run on different platforms at different release levels, and one of them started to ship `globalThis.Iterator` before the others, which is where we had been deleting `Iterator`. So to me it has shipped in V8, and we can delete that workaround.
+
+DE: This seems rather optimistic to get to stage 4 in the next few meetings.
+
+KM: I don’t see an option for it. So I am guessing it is implemented but I am not sure.
+
+MLS: Yeah I don’t understand this either.
+
+#### Summary
+
+- Beginning to ship due to previously accepted web compatibility fix
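
For readers unfamiliar with the proposal: the helpers add methods like `map`, `filter`, and `take` directly on iterators. A polyfill-style sketch of `take` (the shipped method behaves roughly like this generator):

```javascript
// Lazily yield at most n values from any iterable.
function* take(iterable, n) {
  let i = 0;
  for (const value of iterable) {
    if (i++ >= n) return;
    yield value;
  }
}
function* naturals() { for (let i = 1; ; i++) yield i; }
console.log([...take(naturals(), 3)]); // [1, 2, 3]
// With the shipped helpers this is: naturals().take(3).toArray()
```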
+
+### Explicit Resource Management
+
+DE: Resource management is coming out in some engines, right? Is it under development?
+
+DLM: In SpiderMonkey’s case, we have a volunteer who has started on this, mentored by a team member. I believe V8 has it staged in testing, so I assume their implementation is underway.
+
+DE: Do we have test262 for this proposal?
+
+MLS: I believe we do.
+
+PFC: There is a test262 pull request from the champion under review, and the V8 implementation is under review as well.
+
+#### Summary
+
+- Implementations in V8 and SpiderMonkey in progress
+- test262 tests out for review
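
A sketch of what the proposal's `using` declarations do, written with today's syntax (the `Symbol.dispose` shim line is only needed on engines that have not yet shipped the well-known symbol):

```javascript
// `using` calls the resource's [Symbol.dispose]() method at block exit.
Symbol.dispose ??= Symbol("Symbol.dispose"); // shim where not yet shipped
const log = [];
function openResource(name) {
  log.push(`open ${name}`);
  return { name, [Symbol.dispose]() { log.push(`dispose ${name}`); } };
}
{
  // With the proposal this block is: using res = openResource("db");
  const res = openResource("db");
  try {
    log.push(`use ${res.name}`);
  } finally {
    res[Symbol.dispose]();
  }
}
console.log(log); // ["open db", "use db", "dispose db"]
```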
+
+### Source phase imports
+
+DE: Source phase imports, for WebAssembly modules, how are these doing in terms of implementation?
+
+CZW: The work on V8 is under review.
+
+NRO: That is great. Even though we decided that test262 tests were not necessary for this proposal, CZW is adding syntax tests, which is a good idea.
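
For context, the "source" that a source phase import produces for WebAssembly is a compiled-but-uninstantiated `WebAssembly.Module`. A sketch using today's APIs (the 8 bytes below are the minimal valid Wasm binary: magic number plus version):

```javascript
// The proposal's syntax (not runnable without engine support):
//   import source libSource from "./lib.wasm";
const bytes = new Uint8Array([0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00]);
const mod = new WebAssembly.Module(bytes);
// Having the source (rather than an instance) lets you pick the imports:
const instance = new WebAssembly.Instance(mod, {});
console.log(mod instanceof WebAssembly.Module); // true
```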
+
+### `Function.sent` metaproperty
+
+- [proposal](https://github.com/tc39/proposal-function.sent)
+
+DE: This is a Stage 2 proposal, adding a “metaproperty” allowing you to see the first thing that the generator was “primed” with. Maybe that is more composable than using a return value of yield, which ignores the “primed” value.
+
+DE: Stage 2 means that we expect this to become part of the language. We hope to eventually see active development on the proposal; otherwise we could consider moving it back to Stage 1 or withdrawing it. HAX, do you expect to work on this in the future?
+
+HAX: (from matrix) Regarding `function.sent`, we have discussed use cases and possible solutions in several past meetings. However, there is no consensus on whether the use cases are strong enough to support introducing a syntactic solution. Although I generally still believe this is a problem worth addressing, perhaps solving it through a more general feature like function decorators in the form of an API is more promising than introducing entirely new syntax. Therefore, I hope to revisit this proposal after the function decorator proposal advances to the next stage.
+
+#### Summary
+
+- HAX plans to bring this proposal back with a different syntax if function decorators advance
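
To illustrate the "priming" problem `function.sent` targets: the argument passed to the first `next()` call is discarded, because no `yield` is suspended yet. A runnable demonstration:

```javascript
function* collector() {
  const received = [];
  received.push(yield); // resumes with the SECOND next()'s argument
  received.push(yield);
  return received;
}
const it = collector();
it.next("a");                    // "a" is silently dropped (priming call)
it.next("b");
console.log(it.next("c").value); // ["b", "c"]: "a" is gone
// function.sent would let the generator body observe "a".
```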
+
+### Throw expressions
+
+NRO: RBN was working on this just a couple of months ago. There have been some discussions about syntax issues; a couple of weeks ago a pull request with syntax changes, which had previously been closed, was reopened. So there may be something going on.
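
For context, the proposal makes `throw` usable as an expression, for instance in default parameters. The first comment shows the proposed form; the executable part is today's workaround (the `required` helper is made up for illustration):

```javascript
// Proposed (not valid syntax today):
//   const f = (x = throw new TypeError("x required")) => x;
// Today's workaround wraps the throw in a helper function:
const required = (name) => { throw new TypeError(`${name} required`); };
const f = (x = required("x")) => x;
console.log(f(42)); // 42
try {
  f();
} catch (e) {
  console.log(e.message); // "x required"
}
```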
+
+### Function implementation hiding
+
+- [proposal](https://github.com/tc39/proposal-function-implementation-hiding)
+
+DE: Function implementation hiding: this is about making `toString` not show the function source.
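
Concretely, this is the behavior the proposal would let authors opt out of (runnable today; the directive-based opt-in mentioned later in the discussion is only one sketched design):

```javascript
// Today, Function.prototype.toString exposes the full source text,
// comments included:
function secretish() { /* internal detail */ return 1; }
console.log(secretish.toString().includes("internal detail")); // true
// With implementation hiding, toString() would instead return an opaque
// form, similar to "function secretish() { [native code] }".
```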
+
+MF: Yeah, this is still one that I do want to advance. But the last time it was presented, it had opposition from Mozilla. I don't know if their position has changed. The other dependency for this proposal moving forward was error stacks, which, I believe, JHD talks about frequently, and I believe he also has interest in that. So I would like to hear if Mozilla still has a position on that, or if you can recall the position.
+
+DLM: I am not sure offhand, I have to review the proposal.
+
+DE: Switching from facilitator to participant: I had expressed concern about this, because there was some initial hope that it could help save memory, for example by allowing implementations with bytecode to discard the original source. But in practice, bytecode may be larger than the source, so the bytecode is the thing implementations discard.
+
+MF: So there were some people that were hoping for that, but that was never a claimed motivation? Well, I mean, it was claimed, but then it was not part of what was presented most recently.
+
+DE: I see you already removed that. There was sort of a question about what the claimed purpose of the proposal was.
+
+MF: I still consider this a security problem.
+
+DE: Do you intend to follow up on this?
+
+MF: Yes, I can follow up on this. I hadn't been hopeful that Mozilla's opposition would change, but maybe we'll talk about it again and see if there's a chance of that. And if there's not, yeah, I think withdrawing it would be the right public signal for this proposal.
+
+MLS: It has been some time since we talked about this; wasn’t there some concern about adding new directives (“use …”)?
+
+MF: Those are definitely another dimension of concern, but that is a more superficial one; there are more fundamental concerns about whether this is a good idea generally.
+
+DLM: I am not able to fully describe Mozilla’s opposition, but it was something along the lines that it did not align with our security model, and so was not important enough to add to the language.
+
+#### Summary
+
+- DLM to follow up on Mozilla concerns
+- MF to bring proposal back to committee, explaining motivation and pursuing advancement
+
+## `Promise.try` for Stage 3
+
+Presenter: Jordan Harband (JHD)
+
+- [proposal](https://github.com/tc39/proposal-promise-try/issues/15)
+- no slides presented
+
+JHD: Wonderful. So hello, everybody. You may recall `Promise.try` from the last couple of meetings. The test262 tests are merged, and it’s implemented in a number of engines already, although behind flags. I’m not actually seeking Stage 4 today, obviously, because I need to achieve Stage 3 first and give a little more time for browsers in particular to implement. So I’m hoping that this will be an easy Stage 3, given that it’s all ready to go. Are there any questions on the queue before I actually ask for advancement?
+
+CDA: Nothing on the queue.
+
+JHD: All right, well, then, I’d like to request Stage 3.
+
+CDA: Dan Minor supports Stage 3. I support Stage 3 as well. Plus 1 from RGN. Plus 1 from Duncan MacGregor.
+
+MLS: I support Stage 3.
+
+CDA: Awesome. Plus one from Michael Saboff, Tom Kopp, also plus one, so sounds like pretty clear you have consensus.
+
+JHD: Awesome. Thank you.
+
+CDA: Do you want to dictate a summary and conclusion for the notes?
+
+Statement from SYG who was not present:
+
+> V8 has no concerns for Stage 3.
+
+### Summary and conclusion
+
+`Promise.try` has merged test262 tests and has received consensus for stage 3.
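
For readers of the notes: the point of `Promise.try` is to start a possibly-synchronous function on the promise chain, turning synchronous throws into rejections. A sketch that uses the native method where shipped, with an approximate fallback otherwise:

```javascript
// Approximate fallback: the Promise executor runs synchronously, and a throw
// inside resolve(fn(...)) rejects the promise instead of crashing the caller.
const promiseTry = Promise.try?.bind(Promise) ??
  ((fn, ...args) => new Promise((resolve) => resolve(fn(...args))));
const p = promiseTry(() => 42);
p.then((v) => console.log(v)); // 42
promiseTry(() => { throw new Error("sync"); }).catch((e) => {
  console.log(e.message); // "sync": the throw became a rejection
});
```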
+
+## RegExp Escaping
+
+Presenter: Jordan Harband (JHD)
+
+- [proposal](https://github.com/tc39/proposal-regex-escaping)
+- no slides
+
+JHD: Yeah. Awesome. Okay, this one won’t be quite as brief. I’m here today to ask for Stage 2.7; the spec reviewers have signed off. One of the editors has helped. I don’t remember if Kevin (KG) has been made officially a co-champion or not, but he should be if he isn’t already, because he has written a lot of the spec text. I don’t think SYG has signed off yet, but I’m happy to make the advancement conditional on that. The spec text is in there, and there’s just one open question that would need to be resolved before we advance this proposal to Stage 2.7. Currently we’re using character escape sequences: for example, an at sign becomes backslash-at (`\@`). RGN raised the point that if there were ever a time in the future when we wanted to make `\@` mean something other than just a literal at sign, then it would be better for `RegExp.escape` to use hex escapes instead of character escapes, meaning something like backslash-x-something. My understanding of the impetus of RGN’s question is forward compatibility: let’s not close off the design space. Kevin and I would prefer to keep the character escapes for human readability; even though the output of `RegExp.escape` is not super human-readable, readability is still a spectrum, more readable is better than less readable, and character escapes are more readable. On top of that, it’s unclear that the design space is even still open to change the meaning of these things, because if somebody is already using, for example, a `\@` in a regex, I think it wouldn’t be web compatible to make it mean something different anyway. Either way, the spec text to make that change is relatively trivial. So my hope is that today we can make a decision about that question, and then, regardless of the outcome of that decision, advance this proposal to Stage 2.7 with the decided semantics. I see that RGN has some extra context he wants to provide as well. I’ll jump to the queue, if that’s all right.
+
+RGN: A clarifying example that actually kind of proves the point: the at sign is valid after a backslash only in legacy-mode regular expressions. It’s actually invalid in Unicode mode, i.e. with the `u` and `v` flags. I think that is the kind of thing that very well could change over time, especially if we introduce yet another Unicode mode, which is plausible. None of that baggage exists if we do the generic hexadecimal escapes, which is a big motivating factor for why I think it’s the better solution. We can live with either, but I do have a pretty strong preference, for reasons like this, to just do all escapes the same way.
+
+CM: (Clarifying question) are we talking specifically about \@ or is that just an example?
+
+JHD: Yeah, that’s just an example. The backslash at is just a convenient placeholder.
+
+CM: Yes, that’s what I thought. But I wanted to make sure people were clear on that.
+
+JHD: Thank you for the clarifying question.
+
+MF: Yeah, I guess, so I'm very doubtful that we could get away with changing behavior of `\@`. I'm not convinced at all by Unicode mode regexes not allowing them because it's not like regexes got auto upgraded somehow to use the u flag, they still exist without the u flag. I'm not talking about regexes that are written today, I'm talking about ones that exist in code written a while ago. My opinion still is that for this proposal we should have just hex-escaped or Unicode-escaped everything, but given that we're not, that we're doing other escape sequences, then I think we should just keep this, we shouldn't change anything.
+
+CDA: MLS agrees with RGN. And that is it for the queue.
+
+JHD: So I assume, Michael, that your agreement means you’re fine with either, with a strong preference for hex escapes?
+
+MLS: The reason is that both the `u` and the `v` flags disallow these escapes to preserve the possibility that they be used in the future, so I don’t want this proposal and `RegExp.escape` to create something that is legal now and illegal later.
+
+JHD: Okay. Yeah, I mean, it’s certainly clear that making this decision in `RegExp.escape` will close off that design space. So now I’m at sort of the same place: we’ve heard two people now indicate a strong non-blocking preference for hex escapes, and Michael (MF), Kevin, and myself have a preference for character escapes. Not that it’s a numbers game, but I don’t know if it’s worth doing a temperature check, with the question being: the positives would mean keep it as is, the negative or confused would mean change it to hex escapes, and indifferent means you’re good with either one. Is that a reasonable request?
+
+MF: Yeah, and I could probably do this if I read the spec text right now for myself, but could you clarify if we switched the escape sequence for these punctuators to hex escapes, what escape sequence, like what classes of escape sequences we would be using as a whole?
+
+JHD: Sure. NRO, could you switch to the spec text? Okay, so there are basically two chunks of text. This is the main method: we go through all the code points, and if a code point is a decimal digit or an ASCII letter, then we use the hex escape. Otherwise we jump into the abstract operation below, which is where punctuators are considered. I’m probably going to summarize this inaccurately as I talk, but you can look for yourself and see exactly what it’s doing. Yeah, so punctuators are considered, and there are a few hex escapes and then a bunch of character escapes, is how I’m reading this. I’m a little tired, so…
+
+MF: Are you talking about the later ones, the Unicode escapes?
+
+JHD: Yeah. Sorry, it’s hard for me to have this all paged in with the spec text, but let me reread it and try to pull in my understanding. It says if it’s whitespace, a line terminator, or a leading or trailing surrogate, that uses a hex escape, and then everything else is just the UTF-16 encoding. That’s how I’m reading it. My Zoom is sort of covering --
+
+MF: I’m just confirming this. The other escapes would be the single-character control escapes, like `t`, `n`, `v`, `f`, `r`.
+
+JHD: Right.
+
+MF: Okay, then I’m neutral on whether we use hex sequences for this or single character escape sequences.
+
+JHD: Okay. Yeah, so I guess if everybody who has not spoken is neutral, then a temperature check would be a waste of time. So I just kind of -- like, I see kind of two people, then, myself and Kevin, on the side of keeping it as is, and two people on the side of using the hex escapes, but nobody has expressed a blocking constraint in either direction. So my -- yeah, so, like, my inclination is to do the status quo if that’s the result, but since we don’t know for sure who is neutral, I think I agree with RGN’s comment, I’d like to do one and just get a confirmation of that.
+
+CDA: All right, we’re going to do a temperature check, so we need to make sure that folks have TCQ pulled up. We’d hope everyone already has it open, but if you don’t, it’s important that you do, because the temperature check will not show up if you join after it has been placed on the queue.
+
+JHD: Yeah, so I was going to say, as soon as I see the emojis, so I can be precise, I will dictate for the notes and for the room who each emoji represents.
+
+CDA: Okay, great. Clarifying question from Bradford.
+
+BSH: Yes, I think I might have misunderstood something. Are we saying that if we leave it as it is right now, where you escape the at sign with backslash-at, that means you can’t use the result of `RegExp.escape` in a regular expression if that regular expression is going to be used with the `u` or `v` flag? That seems bad, right?
+
+RGN: Can I answer this question?
+
+CDA: Yeah.
+
+RGN: So the current state of the proposal is that there are two classes of punctuators. In one class, you have things like the dollar sign, which are valid in all current regular expression modes as character escapes; `RegExp.escape` outputs it as backslash-dollar-sign. In the other category, you have what’s on the screen now: a character such as the at sign is subject to hexadecimal escaping. If in the future `\@` becomes valid in Unicode-mode regular expressions, the output of `RegExp.escape` presumably would not change, so the at sign would logically be in the same category as the dollar sign, but in output it would maintain its distinct category.
+
+BSH: Okay. Thank you.
+
+RGN: But, you know, to answer precisely the question that was asked, it is semantically acceptable to use the output of regex escape in either mode regardless of how this issue gets resolved.
+
+CDA: That’s it for the queue for now. Temperature check. I hope everyone heard my call for making sure you have TCQ open. I trust that that has allowed people enough time to go ahead and do that. And I’m going to pull up the temp check interface now. Well, not now, but in a moment. Please refrain from casting any votes until JHD has thoroughly explained the meaning of each option.
+
+JHD: So I’ve just typed in some explanations. There’s no real semantic mapping between the emojis and the explanations, but: if you put the strong positive (heart) or the unconvinced, that means you insist on your preference, i.e. it’s blocking; heart for keeping character escapes, unconvinced for switching to hex escapes. If you put the thumbs up, you want to keep character escapes but it’s not a blocking concern. If you put the question mark (confused), you want to switch to hex escapes but you’re non-blocking as well. If you put indifferent, it means you’re neutral. And nobody click the eyeballs; we don’t need that.
+
+CDA: SFC clicked the eyeballs.
+
+JHD: And I appreciate the indifferent being engaged enough to click the emoji. It’s always helpful to have some signal.
+
+JHD: Yeah, nobody had expressed a blocking preference in either direction, which -- so I’m glad this is confirming that.
+
+CDA: You’re in a dead heat otherwise.
+
+JHD: And, remember, Kevin has a thumbs up as well, but he’s not awake right now. But, still, it’s close enough that we could kind of make either call, so I’m just kind of waiting to hear if there’s more pressure in one direction.
+
+CDA: We’re going to call it. I can capture a screen shot for the notes. Yeah, you’re neck and neck.
+
+> Temperature check results:
+>
+> - keep character escapes, block on hex escapes: 0
+> - keep character escapes, non blocking: 3 (omits a +1 from SYG who was absent)
+> - n/a: 3
+> - switch to hex escapes, non blocking: 3
+> - neutral: 19
+> - switch to hex escapes, block on character escapes: 0
+
+JHD: Okay, well, then in that case, I’m going to ask for Stage 2.7 with the status quo, which is character escapes.
+
+CDA: Yeah, plus one from Daniel Minor. Any other explicit support for Stage 2.7? Support from RGN. Also from Daniel Ehrenberg. Also from Michael Ficarra. All right. Hearing nothing else, you have Stage 2.7. And on that note, would you like to dictate a summary and conclusion for the notes?
+
+JHD: Sure. `RegExp.escape` has achieved consensus for Stage 2.7, keeping its current semantics of using character escapes instead of hexadecimal escapes.
+
+CDA: All right. And we’re going to move on to your last topic is --
+
+RPR: Shall we have some applause? There’s been some stunted efforts.
+
+Prepared Statement from SYG who was not in attendance:
+
+> V8 has no concerns for Stage 2.7.
+>
+> As for character vs hex code escapes, V8 can live with either outcome but weakly prefers character escapes. The future stability argument AFAIU is that choosing character escapes makes changing the behavior of character escapes in the future even harder. But it is already very hard to change non-throwing behavior to new non-throwing behavior. We don't understand why this would make it meaningfully harder.
+
+### Conclusion
+
+Keep character escapes (no change). The proposal advances to Stage 2.7.
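
To illustrate the decided semantics, here is a simplified escaping helper in the character-escape style (illustrative only: the real `RegExp.escape` algorithm additionally hex-escapes a leading ASCII letter or digit, whitespace, and surrogates):

```javascript
// Prefix every regex syntax character with a backslash, character-escape style.
const SYNTAX_CHARS = /[\^$\\.*+?()[\]{}|\/]/g;
function escapeForRegExp(s) {
  return s.replace(SYNTAX_CHARS, "\\$&");
}
const re = new RegExp(`^${escapeForRegExp("price: $1.50 (USD)")}$`);
console.log(re.test("price: $1.50 (USD)")); // true
console.log(re.test("price: $1x50 (USD)")); // false: the "." stayed literal
```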
+
+## `Error.isError`
+
+Presenter: Jordan Harband (JHD)
+
+- [proposal](https://github.com/tc39/proposal-is-error)
+- no slides
+
+JHD: So I am now going to discuss `Error.isError`. As a refresher, Error instances are the only language built-ins that do not have a way to be brand-checked. All the others do: Array has `Array.isArray`, and the rest have some sort of prototype method or accessor that, by way of doing something else, performs a brand check and throws or returns some special sentinel value. It is very useful when debugging to do brand checks and know for sure what you’re dealing with, to help the human doing the debugging figure out what they’re actually working with. Similarly, when you’re doing serialization: RunKit, for example, which is the “try it” link on every single npm package page, needs to be able to serialize values safely and reconstitute or describe them for the user. Because there’s no brand-checking mechanism, all they can do is a best guess, and in the past, when RunKit was part of Stripe and Stripe was an Ecma member, they expressed this desire for a way to check for errors. This need is still there. Also, structured clone, which exists in browsers and Node, has special behavior for native errors: it does this brand checking internally, so there’s no way to know in advance whether it will apply; you have to try it and see, which is unfortunate. So for all these reasons, I’m proposing a way to brand-check errors, which at this point has a shape: just a predicate, `Error.isError`. If you want to jump to the spec text, Nicolo: it’s very, very simple. The method calls an abstract operation, and the abstract operation requires an object with an [[ErrorData]] internal slot. I do have this proxy stuff in here, but I am in no way attached to it; I sort of just copied and pasted the spec text from `Array.isArray`. So if that horrifies anyone, let’s just rip it out. I might rip it out even if it only horrifies me. But also please let me know if you want it to be there for some reason. These are all things that can be decided within Stage 2, and I’m here today to ask for Stage 2.
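
To make the brand-checking gap concrete: both `instanceof` and the old `Object.prototype.toString` trick can be forged by an object that has no [[ErrorData]] internal slot, which is exactly what `Error.isError` would not be fooled by (runnable today):

```javascript
// An object that fakes being an Error without the internal slot:
class FakeError {
  get [Symbol.toStringTag]() { return "Error"; }
}
Object.setPrototypeOf(FakeError.prototype, Error.prototype);
const fake = new FakeError();
console.log(fake instanceof Error);                // true (forged via prototype)
console.log(Object.prototype.toString.call(fake)); // "[object Error]" (forged via tag)
// Error.isError(fake) would return false: no [[ErrorData]] internal slot.
console.log(new Error("x") instanceof Error);      // true (genuine)
```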
+
+CDA: I got a number of things --
+
+JHD: Oh, yes, sorry, I’ll elaborate, since this was brought up in a previous plenary. There’s a question about what happens with DOM exceptions, and regardless of the specification mechanism, it seems pretty apparent that DOM exceptions would need to return true from this predicate as well. It would be really confusing for users if an error from the platform versus an error from the language behaved differently. So I’m fully in support of ensuring that HTML is able to integrate this and cause DOM exceptions to return true for the predicate; that’s part of why it’s an abstract operation. Whether that’s done with a host hook or an internal slot is, again, something we can figure out within Stage 2 or maybe in 2.7. Yeah. So now let’s go to the queue.
+
+CDA: All right. MF, although it might have been answered.
+
+MF: Yeah, I feel like you kind of jumped the gun on that one. This is relaying a comment from KG. KG left me with two comments. He is, I believe, sleeping now. The first one is just on DOM exceptions. Sounds like we're on the same page. Have you spoken to anybody in the HTML world?
+
+JHD: I spoke to SYG about it, and my recollection is we were all on that same page. If not, that’s something we would need to address. Someone filed an issue pointing out that, I think, Web IDL doesn’t have the [[ErrorData]] internal slot right now; whether we need to add that slot to Web IDL or HTML, or do something else in this proposal so that HTML and 262 can talk to each other, either way the outcome should be that host errors and language errors are both considered errors by this predicate.
+
+MF: Yeah, and I guess technically, it’s a Stage 3 concern in our process.
+
+JHD: Right.
+
+MF: But, you know, it would be nice to not have to get there to find that out.
+
+JHD: Yeah, and I’m happy to put up a draft PR as early as possible, even before 2.7, to make sure the HTML folks are on that page as well. But I wouldn’t want the proposal to advance too far without that being worked out. So I’m very invested in making sure that happens.
+
+MF: Another comment I’m relaying from KG: he’s still not convinced by the justification for this proposal, but it’s not worth being the sole objector for Stage 2. If there are other people who are similarly unconvinced, he would like to stand with them in objecting to Stage 2 for this proposal. And since he’s not here to do that, and we work for the same member company, I would do it on his behalf.
+
+USA: I don’t know if Michael, you were before me in the queue. But, okay, well, I would say that I really like this proposal. Thank you, JHD. I hope that this could also work for sub classes irrespective of most things, but that’s, you know, just a detail. One question that I had for you was that why -- or sort of why doesn’t this proposal talk also about checking the error type? Like, you know, once I know if something’s an error, I would like to go further and see what kind of error it is, or I might, so what about that?
+
+JHD: Yeah, I mean, I think once you know it’s a built-in error, host or language, then at that point you can check the constructor, check the `name` property, and so on. Those things are all forgeable, and there’s not another mechanism in the spec. Part of my argument for `Error.isError` is that there are ways native errors are treated differently currently, but I don’t believe there are ways the *type* of the error is treated that way. An alternative is that we could have `Error.isError` plus per-type predicates, but that’s adding a bunch of extra methods for something that doesn’t currently have a use case, and I’m not sure it could even be added later, because TypeError extends Error. If that’s a consideration, we can make adjustments within Stage 2 to account for it if desired; it would require, I believe, adding an internal slot somewhere in all the native errors, AggregateError and whatnot, to track the kind of error it is. But I’d have to do a lot more research, including into what DOM exceptions do in that regard.
+
+USA: Right.
+
+JHD: And it’s also unclear how subclasses would work. Subclasses will just work if they call `super`, like a subclass of any other built-in: you have to call `super` to be a proper subclass and get all the internal slots. But it’s not clear how, say, a subclass of TypeError, which calls `super` up into Error, would indicate to Error that its type is TypeError.
+
+USA: Sure.
+
+JHD: So I’m happy to keep exploring that, but I’m not proposing it yet, because I haven’t done the research, nor has there been any expressed desire for it before this.
+
+USA: Okay. That’s understandable.
+
+JHD: Thank you.
+
+CDA: MF
+
+MF: Brand checking is icky and we shouldn’t do it.
+
+JHD: Yeah, I mean, my response to that is, it’s fine to have that on the record. It’s fine to have that opinion. But we already do it everywhere.
+
+MF: Okay. My next topic is an error stacks proposal. We talked about this a little bit online, but I just want to bring it up here in plenary. Last meeting, I had asked whether this proposal is obviated by the error stacks proposal, and I think you confirmed that it would be able to do what you’re trying to get out of this proposal. And I asked that we then try to pursue the error stacks proposal and see if we can make progress on that. And, failing that, I would be okay with this proposal moving forward. And I didn’t hear anything from you about trying to pursue that further, so what can we do about that?
+
+JHD: Yeah, I mean, I had some more discussions with SYG about that, and there are a few things. First of all, the error stacks proposal would provide the capability, but with none of the ergonomics. That would be acceptable, because brand checking most things is not ergonomic; fine, whatever, I just need the capability, even though it’s nicer to have it be ergonomic in this sense, as a predicate. But I spoke to SYG about the stacks proposal, and it seems like V8’s position is roughly the same: they don’t yet see the value of partial interoperability, they still want all or nothing, and “all” is a sort of "boil the ocean" level of difficulty that I have not had the time to research, nor has anyone else stepped in to help. So while I would still be interested in seeing the stacks proposal advance, and I still believe it should land as-is with a follow-up specifying the contents of stacks, it’s been six years or something, and nobody has been interested enough, or had enough time, to do the research required to navigate that obstacle. This feels like a mistake we should have rectified before `Symbol.toStringTag` shipped in browsers; it’s now been eight years or so and we still haven’t done it. I don’t see the point of waiting on stacks anymore.
+
+MF: On the other hand, how pressing is this? Are you seeing developer demand for this? I think it’s –
+
+JHD: It’s not newly pressing. Yeah, yeah, it’s certainly not newly pressing, but the -- I mean, how many more years do we want to wait?
+
+MF: Some? I don’t know.
+
+JHD: I mean, it’s been six to eight years. Like, I think "wait and let’s see how error stacks goes" was the original response given to me eight years ago. Is it eight? Yeah, a while ago. And that’s why I pursued the stacks proposal. I mean, regardless, I wanted stacks to be in the language, but I had chosen to champion it specifically because I wanted this capability, and I almost was there, and then a new constraint was added that prevented me from making any progress for many years. So I don’t think it’s reasonable to ask to wait any longer for error stacks, or to, you know, boil the ocean, or to navigate that constraint.
+
+MF: Understandable.
+
+CDA: RGN.
+
+RGN: I think it’s well-known that Agoric is sensitive to issues of membrane transparency and the interaction of proxies with other aspects of the language. For this case in particular, because the nature of errors is to generally communicate diagnostics, this exception to the general rule is actually justified, and in practical terms, doesn’t violate membrane transparency. The behavior actually makes a lot of sense, and so if you do keep it as-is, that totally works for us. The proposal in general, just on a personal note, does feel well founded; being able to detect an error instance is valuable, if not the most important thing in the world. So no objections from us.
+
+CDA: Rob.
+
+RPR: So there’s a prepared statement from SYG on this. These are non-blocking concerns that V8 would like to be resolved during Stage 2. The first of which says `Error.isError` should return true for DOMException and its subclasses. Today DOMExceptions aren’t true subclasses of 262 Error, but they have `Error.prototype` on their prototype chain, and they have the special stack tracing magic that 262 errors have. It would be confusing for the web developer if `Error.isError` didn’t consider them actual errors. The easiest way to accomplish this is to make DOMExceptions real subclasses. But this is technically observable in case one deletes `DOMException.prototype[Symbol.toStringTag]` and then does `Object.prototype.toString.call(new DOMException())`. V8 is optimistic this is a backwards-breaking change that browsers can make, because who’s out there deleting the `Symbol.toStringTag` property? I’ll pause there in case you want to note something in response to SYG.
+
+JHD: Yeah, just, I mean, we talked about DOMExceptions. I agree with SYG’s statement that `Error.isError` should return true for DOMException and its subclasses. SYG is also correct that the only way one can determine that they aren’t subclasses is doing the `Symbol.toStringTag` mutation, and a very small number of JavaScript developers on the planet have probably thought about that. So I agree with him as well, and I’m glad to hear V8 is optimistic that that is a change that could be made, because arguably that’s how DOMExceptions should have worked in the first place. And as far as -- yeah, I’ll let you continue.
+
+RPR: And then the second concern is this method will return false --
+
+NRO: Yeah, when we did the changes for abstract module source in the import source proposal, and also the changes for trusted types, V8 brought up that it’s much better to do this type of cross-host/262 check not with structural C++ subclasses with internal fields, but with a hook: you would pass the object to the host, and then the host will tell you whether it’s an error or not. And so, like, we tried to follow that, and maybe set a precedent with those two proposals. And maybe we don’t need to change the prototype of DOMException if we go through that path.
+
+JHD: Well, DOMException already has `Error.prototype`, and the only change needed is in `Object.prototype.toString`, which checks for the internal slot. So there’s a few paths by which this could be achieved, which can be worked out in Stage 2 and 2.7 and in the HTML PR. Either DOMExceptions can be given that slot, and then the change SYG is describing would happen implicitly; or, if a host hook is preferred, then this abstract operation on the screen would check the internal slot and also the host hook, and then we could, if we wanted, also in this proposal change `Object.prototype.toString` to check the host hook alongside the error internal slot -- the `Object.prototype.toString` change would be optional in that regard. Either way is fine. The slot feels cleaner to me, but I don’t feel that attached to it, and if we want to go with a hook approach, that’s fine. The end result would be the same, more or less.
+
+??: Unless you’re deleting `Symbol.toStringTag` and doing `Object.prototype.toString.call`, DOMExceptions will be indistinguishable from subclasses of Error, with or without this method. That’s, like, the goal.
+
+RPR: And then the second concern from SYG is that this method will return false for user subclasses that are shipping today that forget to call super. Interested in the committee’s thoughts on this.
+
+JHD: And it was pointed out in Matrix -- I think Dan said it -- that if you forget to call super, it will throw, unless you go out of your way to return an object from the constructor. And hopefully we can all agree that that is an obscure and niche and rarely used facility in constructors. So I think that behavior is correct, because if they forget to call super, they aren’t subclasses; they’re just things trying to sort of pretend to be. I mean, that’s how the brand checks work with everything else in the language that already exists. So I don’t see that as a conflict, personally. But I can go to the queue if others have a different opinion.
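
The two cases under discussion can be sketched like this (an illustrative example, not from the meeting): a derived constructor that never calls `super()` throws when constructed, unless it explicitly returns an object, in which case the result is not really an Error at all.

```javascript
// Case 1: forgetting super() makes construction throw a ReferenceError.
class NoSuper extends Error {
  constructor() {} // no super() call
}
let threw;
try {
  new NoSuper();
  threw = false;
} catch (e) {
  threw = e instanceof ReferenceError;
}
console.log(threw); // true

// Case 2: the escape hatch -- returning an object from the constructor
// avoids the throw, but the result has no [[ErrorData]] internal slot.
class PseudoError extends Error {
  constructor() {
    return { message: "not a real error" };
  }
}
console.log(new PseudoError() instanceof Error); // false
```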
+
+CDA: There’s no reply to that, but I’m just noting we have less than ten minutes. Matthew.
+
+MAG: So when I have talked to people, I get mixed opinions on whether or not `Array.isArray` seeing through realms is a good thing or a bad thing. You know, it -- what do we feel about this? Like, this is another instance of a cross realm brand check, and I think this would bring the count from one to two, right?
+
+JHD: Well, all brand checks are cross-realm. This is a proxy-piercing one, which would bring that count from one to two. Like, `Array.isArray` will also return true for a proxy to an array, and it pierces the proxy to do that. That is unrelated to whether it’s the same or a different realm. And the spec step 3 here in that abstract operation is the equivalent step in IsError. So that’s the part where people typically react: either this is good, this is bad, this is weird, or "I have no idea what it is". It’s one of those four reactions. And --
+
+MAG: I mean, no, there is also an aspect of this that is, like, realm specific, though. It does matter -- you could conceivably write this in such a way that it makes sure the error was constructed in the same realm. And --
+
+JHD: Yeah, you could. But that’s not how every other brand check in language works. They are cross realm, and that’s an important feature of this proposal. There’s not only one cross realm brand check. There’s zero same realm brand checks right now.
+
+MAG: Because you’re counting things like `toStringTag` as other ways?
+
+JHD: I mean, there’s the internal slots there, but `Number.prototype.valueOf` or `String.prototype.toString`, if you call those on a boxed number or string object from another realm, they will work and not throw, because they’re doing a cross-realm brand check. Internal slots are also cross-realm, and that’s the mechanism being used here.
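
The existing checks JHD mentions can be sketched as follows (an illustrative example, not from the meeting): `Array.isArray` answers based on the proxy's target, and internal-slot methods like `Number.prototype.valueOf` accept any object carrying the right slot, while rejecting objects without it.

```javascript
// Array.isArray pierces proxies: it reports on the target, not the traps.
const arr = [1, 2, 3];
const proxied = new Proxy(arr, {});
console.log(Array.isArray(proxied)); // true

// Internal-slot brand checks: valueOf works on any object with
// [[NumberData]], regardless of which realm created it...
const boxed = Object(42); // a boxed Number
console.log(Number.prototype.valueOf.call(boxed)); // 42

// ...and throws a TypeError on an object without the slot.
let slotCheck = false;
try {
  Number.prototype.valueOf.call({});
} catch (e) {
  slotCheck = e instanceof TypeError;
}
console.log(slotCheck); // true
```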
+
+MAG: Okay, I’m done.
+
+JHD: Thank you.
+
+CDA: Yeah, we just have a couple minutes left. I want to allow some time for summary and conclusion at the end. DMM
+
+DMM: Just sorting out my mic.
+
+CDA: Okay.
+
+DMM: I support taking this to Stage 2. I think the cross-realm case is actually extremely useful. At the moment, we have perfectly good ways to do this down at the engine level, but not idiomatic ways to do it at the script level, and we’d like to depend more on running things in separate realms and being able to check properties like this easily from a script that has been given a value. So, yes, I strongly support it.
+
+CDA: All right. I am a +1 on the queue. The current state of duck typing is finicky, so I like anything that moves the needle on this to make it a little easier. DE is on the queue.
+
+DE: So I wanted to comment on what was raised a while ago about the relationship between the error stacks proposal and brand checks. I think having predicates for brand checks is generally a good idea, and I’m glad that we have it proposed here. What I’ve been a little bit more uncomfortable with is that, when making a new proposal that introduces a new kind of object that does have a brand, some people in committee who want brand checks to happen have been asking champions, please make sure you have an operation that checks the brand, as a requirement for the proposal. It’s a very weird middle point, because we both can’t have an operation that explicitly checks the brand, `Error.isError` style, because some people object to brand checking, and can’t not have an operation that checks the brand, because of this directly contradictory requirement. So we should definitely make a decision as a committee about which way we want to go and not make these directly contradictory requests of proposal authors. This is one of the kinds of things that makes it complicated to champion a proposal. So hopefully we’re making that larger decision here about, well, are we requiring or prohibiting brand checking for things. The relationship between this `Error.isError` check and error stacks is that error stacks would be a way of kind of sneaking in a brand checking operation for errors, which is a weird thing for committee to ask people to do, because it only makes sense if you’re trying to solve for that particular contradiction, where we’re both requiring and prohibiting brand check operations at the same time. So, yeah, I also think the primary benefit of error stacks would be not enabling that brand check, but instead having more interoperability in the way errors work and the way you program around them, which is, I guess, what had been asked previously from browsers. Thanks.
+
+??: Thank you.
+
+CDA: All right. I think those are some really important comments from DE, which I agree with. We are right at time. But I do want to just briefly give the folks on the queue, if you could be very brief, please go ahead, MAG.
+
+MAG: Yeah, I just wanted to echo this: that when we make decisions, we should be better about writing them down and then sticking to how we decided, and having less of this ambient state where part of the committee wants X, part of the committee wants Y, we find a compromise in the middle, and then re-evaluate it every time we come back to it.
+
+MF: I’m not comfortable committing to what DE suggested and saying this is precedent setting. I think for each of these brand checks that we have admitted, we have asked the champion to individually justify them, and I think we should continue asking, as other objects are introduced that could be brand checked, that there be a justification for adding a brand check. Now, some of us feel differently about whether this justification was sufficient; as I said earlier, my colleague KG feels that it was insufficient. But I think the committee as a whole has come around to allowing it, though it shouldn’t set precedent for other types of objects.
+
+CDA: Dan.
+
+DE: It’s fine if we say that we’re not today deciding that everything gets a brand check always. But I would like to leave the parties that strongly disagree with each other with the action item to get together, talk it over, and develop a proposal for a common path for future TC39 proposals, rather than the current thing, where we’re both required to have brand checks and prohibited from having unjustified brand checks, because that is a weird and contradictory state that actually nobody’s asking for; it’s just sort of the result of going through the maze between people with differing opinions. So can JHD and Michael and other people who also have strong opinions about this commit to working together on this?
+
+JHD: Yeah. I’d love to talk more about it.
+
+CDA: Yeah, I think that’s a great suggestion. Thank you, DE. All right, we are past time. JHD, could you -- well, I think actually you want to call for consensus.
+
+JHD: My question is: can we have Stage 2 for this, with the understanding that DOMExceptions will be considered errors, HTML integration will be worked out as soon as possible, and the slot-versus-hook question can be resolved during Stage 2?
+
+CDA: I support Stage 2.
+
+CDA: You have plus one from Daniel Ehrenberg. Plus one from DMM. Plus one from Chip.
+
+JHD: Thanks, everybody.
+
+CDA: All right. So you have Stage 2.
+
+NRO: Don’t forget to ask for reviewers.
+
+JHD: Anyone like to volunteer to be a spec reviewer? Don’t all step forward at once.
+
+CDA: ACE has volunteered, SRV has also volunteered. I think that’s all you need for now.
+
+### Conclusion
+
+- advances to Stage 2
+- There are still concerns in the committee about brand checking
+- DOM exceptions and all host errors will be considered errors by this predicate
+- integration with HTML will be pursued as fast as possible
+- this does not set a precedent for brand checks one way or the other
+- action item: interested parties will have further discussion and try to agree on a consistent design principle around brand checking moving forward.
+
+## `Promise.try` and `RegExp.escape` addenda
+
+RPR: Just before we switch back to Nicolò, I need to add things to the notes for the previous two topics before isError. On `Promise.try`, SYG had a prepared statement saying that V8 has no concerns for Stage 3. And then on `RegExp.escape`, likewise, V8 has no concerns for Stage 2.7. As for character versus hex-code escapes, V8 can live with either outcome, but weakly prefers character escapes. The future-stability argument, as far as we understand it, is that choosing character escapes makes changing the behavior of character escapes in the future even harder; but it is already very hard to change non-throwing behavior to new non-throwing behavior, so we don’t understand why this would make it meaningfully harder.
+
+## Deferred import evaluation for Stage 2.7
+
+Presenter: Nicolò Ribaudo (NRO)
+
+- [proposal](https://github.com/tc39/proposal-defer-import-eval/)
+- [slides](https://docs.google.com/presentation/d/1EjV6QbT4bvcOdWj-gCLwP5fcEWRfewzbrI3vOI11LA8/)
+
+NRO: So what is this proposal? It allows you to import modules while skipping their evaluation, and only evaluating them when you actually need to read their exports. The syntax looks like this, with the `defer` keyword, and this import doesn’t actually execute the module until later, when we actually use it. And a reminder that the proposal only supports deferred imports of namespaces, so that evaluation happens on property access and not on binding access. What is the motivation? As apps get larger, the cost of evaluation becomes significant, and so being able to skip it can give some noticeable advantages.
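
As a sketch of the syntax being described (proposal syntax, not runnable in today's engines; the module name is hypothetical):

```js
// `import defer` (proposal syntax -- not yet implemented in engines).
// "./math.js" is a hypothetical module.
import defer * as math from "./math.js";

export function onFirstUse(x) {
  // "./math.js" is loaded and linked up front, but its (synchronous)
  // evaluation only happens here, on the first property access:
  return math.square(x);
}
```

Only namespace imports can be deferred, since evaluation is triggered by property access rather than binding access.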
+
+NRO: Last time this proposal was presented, it was blocked on the top-level await semantics. We went through the alternative options again with the interested parties and we concluded that the proposal should remain as is. What does "as is" mean? Well, when you do an `import defer`, we need to eagerly pre-evaluate the async parts of the module graph so that the later execution can happen synchronously. So `import defer` doesn’t completely skip everything; it’s a best-effort optimization. What does this mean in practice? With a graph like this at the top, with dashed arrows representing deferred imports: from the entry point, we need to look for its dependencies, and the dependencies that need to be evaluated are the eager ones, so the one on the right, and the async dependencies of the one on the left. So we start evaluating them. And then later, when we actually trigger evaluation of the deferred module, some of its dependencies will have already been evaluated, and we only evaluate the missing parts. In practice, this means an `import defer` statement is equivalent to splitting it into multiple imports: one deferred import when we first link everything, and individual eager imports for each of its asynchronous dependencies. The other potential approaches that were considered were (a) to disallow `import defer` from a module that has async dependencies -- and the problem with this is it makes the two features completely incompatible -- and (b) to disallow `import defer` of a module that has async dependencies not yet evaluated, so if you import the async dependency first, you can defer the rest. And the problem there is that maybe somebody else is importing your async dependency and you don’t really notice, so it works, and then you remove some other dependency and it stops working. Actually, Ashley, if you want to take the board -- I know that you experimented with this approach in the past.
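
The "equivalent to splitting into multiple imports" behavior can be sketched like this (proposal syntax, hypothetical module names):

```js
// This single statement:
import defer * as lib from "./lib.js";

// behaves roughly like the combination of:
//
//   // eager imports of each asynchronous dependency in lib's graph,
//   // evaluated up front so no `await` is needed later:
//   import "./lib-async-dep.js";
//
//   // plus a deferral of the purely synchronous remainder, which is
//   // evaluated synchronously on the first `lib.someExport` access.
```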
+
+ACE: Thanks, NRO. Yeah, so at Bloomberg, we kind of have a vision of this hypothetical future, because we were in a position for many years where we had both asynchronous modules and also a way of synchronously importing modules. And what we found was a variety of things. One, both these features were used. And problems arose. Something would be synchronously importing something, and that would work fine, because either the thing they were synchronously importing was not asynchronous and it was fine, or the thing they were importing was asynchronous, but something else had already loaded it, so it wasn’t an issue that they were synchronously importing it. In both of those situations, in the future, those things can change: either the thing that was previously importing the asynchronous module, allowing the synchronous import to work, would stop -- it would stop having that dependency, or maybe the import would happen a few ticks later -- and suddenly that synchronous import would explode. From their point of view, they didn’t change anything; their dependencies changed their timing, and things would work or not work. Or an asynchronous module was introduced, and then it would suddenly stop working. This didn’t put people off trying to use synchronous require. They wanted to implement, you know, in the common case, laziness: they’re trying to import those things when they’re really sure they need them, because they hit the control flow that needed it. So people started being very defensive. They started adding lots of imports to their module, not because they need those dependencies, but because they suspected they needed to import these things to make their later synchronous imports work. So you had this kind of odd coupling, and this on its own isn’t very pleasant: you have these weird imports of modules that aren’t to do with this module. But we also saw other bad things arising from this. People were importing too many things at the top -- they were importing more modules than just the asynchronous ones, or they were importing the asynchronous ones and then those stopped being asynchronous -- and it’s very hard to know who your subdependencies are. The subdependency tree is changing, and the state of that module tree is constantly changing, depending on what other people have imported. So for us, when we were building the kind of engine mechanism for this solution, we went for the model where the module graph knows which things are asynchronous and handles that. When code is synchronously importing, we think the module system is the best actor in this whole game to know exactly which things should be imported eagerly -- not the person asking for the dependency. Thanks, NRO.
+
+NRO: Thank you. So, yeah, why are we doing this proposal and not just using dynamic import? I should have had this slide earlier. Dynamic import is great. It’s actually better in many cases, because it can skip much more, such as loading. The problem is it forces you to make the code asynchronous, and it’s much more difficult to adopt. Dynamic import has been there for maybe six years now, and still, we’ve seen that there have been problems with adoption due to the changes it requires across the whole codebase. So we’re looking for something that’s easier to adopt, and this is where `import defer` comes from.
+
+NRO: The top-level await semantics can cause complexities, and it’s not always clear which import statement triggers evaluation. And the module graph is already very complex: it’s not easy to figure out why some module is being included in your bundle, or whether a module is asynchronous or not. There are many tools to answer these questions. Webpack, for example, tells you what is being imported and why and how much it weighs on your bundle size, or, for example, prints a whole tree of all the modules imported, which of those are imported multiple times, and how much they weigh on your application. But with this proposal, there are other questions that you might want to answer, such as: how much time does evaluating a module take, so is it worth trying to defer it? Or: is an `import defer` somewhere not actually being deferred as expected? We believe browser devtools can provide some help with answering these questions, and we have a prototype for how this could be done. This is a small app with `import defer` of some modules that affect the app’s startup time. We could have this type of devtools view, maybe somewhere in the performance panel, where we could see, for example, that main has these dependencies, some of them are being deferred, and something is triggering evaluation of this module and this module. Then we see, oh, actually, there is something slow here, so we check what this module is, and we see, okay, this module took some time to evaluate, and it’s imported through this deferred chain coming from main.js, and maybe later there was the evaluation of something else, and that deferred execution was caused by this point in the app. So we believe devtools can help, even though `import defer` adds more complexity.
+
+NRO: Okay, so there are some changes since this proposal was last presented. We found some bugs in the proposal. One of them was related to reentrancy. The current module evaluation algorithm assumes that the synchronous part of the evaluation is not reentrant: when you start it, there is no module in your dependency graph that is currently being evaluated. The previous version of the proposal was breaking this assumption. Specifically, if we have a module graph like this, where we have a cycle containing a deferred edge: when you evaluate this graph starting from A, A moves to its evaluating state, and then B, and then let’s assume that before finishing evaluation, B triggers evaluation of C. C becomes evaluating, and when C goes to check its dependencies, it finds B, an evaluating module, and this signals that there’s a cycle. When there’s a cycle, we evaluate all the modules in the cycle and transition them to the evaluated state. So in this case, C would transition to evaluated, but we cannot transition B to evaluated yet, because the evaluation of C completes in the middle of the evaluation of B. And so, for example, what if B throws after the evaluation of C has been triggered? Then C would have a failed dependency while itself being in a successful state. Due to this reentrancy problem, we decided to prevent you from evaluating a deferred module if it has cycles with modules that are currently in their evaluating state. In this case, trying to evaluate C would be an error, thrown before performing any evaluation. And this is the same approach as the `require(esm)` implementation in Node.js. Actually, we found that this case in Node.js was causing V8 to crash.
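
The cycle NRO describes can be sketched with hypothetical modules (proposal syntax, not runnable in today's engines):

```js
// a.js
import "./b.js";

// b.js
import defer * as c from "./c.js";
c.value; // mid-evaluation of b.js, this triggers evaluation of c.js

// c.js
import "./b.js"; // cycle back to b.js, which is still "evaluating"
export const value = 1;
```

Under the updated semantics, the `c.value` access throws before performing any evaluation, because c.js is in a cycle with a module (b.js) that is currently in its evaluating state.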
+
+NRO: And the second change is that the proposal now disallows reads from deferred namespaces of modules whose evaluation failed. Previously, this might have thrown or might not have thrown, depending on whether the access was what triggered the evaluation of the module: in that case you would get the error; otherwise, when reading a property from the module, you would maybe get the value, if the value was defined before the error in the module, or maybe get a TDZ error for a variable defined after the error. So now, to prevent this raciness -- this difference that’s difficult to predict -- whenever you access properties from the deferred namespace of a module that cannot be successfully evaluated, because it throws now or it already threw in the past, it’s always going to throw an error. This is guaranteed to throw. There’s a consequence of this: existing namespaces do not behave like that. If today you have a namespace of a module that threw -- and you can get that through cycles -- then accessing properties would not necessarily throw. So, in order to do this, we had to create two namespace objects per module. Namespace objects can be created lazily, only when needed, so that the new one can have this throwing behavior. This means that the identity of the deferred namespace and the eager namespace is not the same. We have complete spec text and the reviewers reviewed the changes; editorial reviews are ongoing. Thanks, Michael, for already providing your feedback. And to the queue.
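
A sketch of the consequence (proposal syntax, hypothetical modules):

```js
// throws.js:
//   export const x = 1;
//   throw new Error("evaluation failed");

import defer * as deferredNs from "./throws.js";

// Once evaluation of "./throws.js" has failed, every access throws,
// deterministically:
deferredNs.x; // Error

// A non-deferred namespace for the same module (today obtainable only by
// leaking it out of an import cycle) keeps the existing behavior, where
// accessing .x can still yield 1. Supporting both behaviors at once is why
// the deferred and eager namespaces must be two distinct objects.
```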
+
+KM: I guess my question -- and I understand from the last slide -- is why should we allow a module to be both deferred and not deferred? Or is it just that you’re trying to show it in a localized case, but it’s different modules?
+
+NRO: You can have two different modules, maybe two different libraries, with the same shared dependency, and you cannot control what the other library is doing. If your module A uses libraries B and C, and B and C have a shared dependency D, maybe B would want to `import defer` D while C imports D eagerly. Then, in your specific case, D is not actually deferred, but for other consumers of B it might be deferred.
+
+KM: Okay, thanks.
+
+RPR: This is a prepared statement from SYG: V8 has no blocking concerns. And then a separate question for the room: how do other implementers feel about PR 43? Is this a correctness footgun? Do people use namespace objects as map keys?
+
+NRO: The thing that SYG is concerned about is that you put X1 in a map and you try to get it out with X2 and it doesn’t work.
+
+RGN: Can you repeat that. I don’t think I understand the association of this pull request to module name space.
+
+NRO: Before this pull request, X1 and X2 would be the same object: maybe you put one in a map, and you could get it out with the other variable. And now there are two different objects, so you cannot.
+
+RGN: One of those objects exists already, and one of them would only exist via this proposal?
+
+NRO: Yeah, it’s not a backwards compatibility concern. It’s about, I guess, whether people might put both of them in a map and use them as keys.
+
+DE: These are relevant for non-implementers too. So I -- I can’t really think of why you would care about the identity of these things matching. The important thing is that the underlying module has the same identity, and it -- I would imagine that it might simplify implementations if they -- you know, the spec has a common path, a common sort of class for import and import defer modules, but unifying those, I don’t know whether that makes sense in actual implementations or not. It might be easier if these are separate, even.
+
+CDA: I accidentally advanced the queue. I guess somebody else clicked as well. So I think we’re still on the previous topic. There’s a reply from JHD.
+
+JHD: Yeah, just can you help me understand why this solution necessitates two distinct namespace objects?
+
+NRO: Yes. In this first example, let’s assume we also have NS2, which is a namespace object for the same module, obtained through existing ES semantics. So you have this NS2, a namespace for the module that throws, that you obtain not through this proposal. Accessing NS2 will not throw an error. In practice this is not confusing today, because it’s very difficult to get a handle to the namespace object of a module that threw: you need to have a cycle, and then in the cycle leak the namespace, for example, like, [INAUDIBLE] global object. However, with `import defer`, it’s much more common to get access to namespaces of modules that might not have been successfully evaluated, because you don’t need cycles for that anymore -- you have the direct `import defer` statement. Given that it’s more common, it would be good if they consistently throw, rather than throwing in some non-deterministic way depending on whether the module has been evaluated. So it would be good for them to have two different behaviors -- or, well, for the new one to not follow the behavior of the old one. And so they need to have two different object identities.
+
+JHD: Is there no possibility of getting the desired throwing behavior without creating two distinct objects?
+
+NRO: So the problem with that is that when you have the namespace of a module that is being evaluated -- like any classic module -- you don’t know yet whether it will throw or not. So it would be unfortunate to first let you get properties from it while it’s being evaluated and then mark it as: okay, now it’s failing, and you cannot get properties from it anymore. We considered that approach.
+
+CDA: Luca.
+
+LCA: I also find this unfortunate because I don’t know, it’s a nice property. This looks kind of weird. I understand why it’s done. I think it makes sense. I sort of agree with it. But I don’t know, I just want to say it’s weird. I find it weird to have two name space objects for the same module.
+
+DMM: I -- oh, because I’m in person. I’m trying to get my head around the precise details of this. Are we requiring that imported namespaces and deferred imported namespaces are always distinct, or are they only distinct during the stage where things are unresolved? If I have already imported -- if I have resolved the deferred import, will I need to keep two copies of that namespace in some way so that they can be checked, or can I, once the import is over, discard one of them and just return a single one?
+
+NRO: I think it’s an important property that once you observe the identity of two objects, that doesn’t change, so if they start being different, they don’t become equal later.
+
+CDA: All right, was -- I couldn’t tell, was somebody still speaking? No? Keith.
+
+KM: Yeah, I also don’t necessarily think it’s blocking. I think it’s kind of weird and unfortunate, as other people said. I guess I’ll go on to my next topic, which is: why couldn’t implementations sort of treat the deferred namespace -- like, if anywhere in the graph there is a deferred module -- I guess you wouldn’t know up front, but if anywhere in the graph there is a deferred module, under the hood, treat access like a TDZ access, and then at the time you access the name -- in the same way that array length works right now, where it’s actually a hidden getter -- if it sees the empty value for that slot, it just goes and calls the thing. Does that make sense? I don’t know if that makes sense.
+
+NRO: Do you mean the error happens on access to NS, or on property access on NS?
+
+KM: When you do property access on -- yeah, on NS. Right?
+
+NRO: Can you ask that again.
+
+KM: Can I -- what, sorry? Do you want me to say it again? If I understand the problem space correctly—and maybe I just don’t understand what’s happening—the issue is that some NS throws when you go to evaluate it, right? And it’s not so much that when you access `NS.foo` there is a separate namespace object that has different properties; if the module hasn’t been evaluated yet at the time the access happens, the engine under the hood makes all the namespace properties it wants into hidden getters, in the same way array length is a hidden getter in all implementations, and if the value was never filled, it goes and evaluates the module, right? And everybody still sees the hidden getter version—and if anybody wants to eagerly load the module in the future, they fill in all the hidden values on the same namespace object. You only ever have one namespace object. Does that make sense?
+
+NRO: Yeah, that was exactly the first version of this pull request. But, again, the problem there is that if you do `NS.foo` and the module evaluates `foo` and then throws, you would either get the error or not, depending on whether somebody else had evaluated the module. If the module is not evaluated yet, the getter will throw: it will evaluate `foo`, and as part of its completion, it will throw an error. While if `foo` is already present, it would have just returned it as is. And this means whether the error is observable or not depends on code that you don’t have control of, and then maybe you change some other dependency and your module stops or starts throwing.
+
+KM: I see. I guess one solution, and I don’t know that this is possible, would be to have every module be in that mode. Like, in the deferred, like, state I’m describing with these hidden getters and you remember the error and you would just rethrow the error the second time. Or is that –
+
+NRO: That’s what this does.
+
+KM: But you wouldn’t have to have two different name spaces?
+
+NRO: No, you would have to change the behavior of namespace objects, because namespace objects currently do not remember that error—like, classic namespace objects.
+
+KM: Okay, maybe I just don’t understand the problem and I’ll have to come back later.
+
+SFC: Regarding the question of object identity: it seems like this is the kind of tool that is useful for taking an existing module graph and trying to make it a little bit more efficient by adding in these import defer statements. I feel like a property that should be upheld is: if I have a synchronous module that I’m importing, and the module that I’m importing changes between having import and import defer for its own dependencies, I shouldn’t see any observable change from my side in terms of object identity or anything like that. I’m not sure if that is being upheld or not. Maybe you can clarify that, if what I just said made any sense.
+
+NRO: Yeah, so if the dependency you’re importing itself switches from using import to import defer, the identity of the namespace you get would not change. What you would observe is: if that module is reexporting the namespace of its own dependency, then that has a different identity than before, because now it’s the deferred namespace.
+
+CDA: Ashley.
+
+ACE: Is this working?
+
+CDA: Yeah.
+
+ACE: In case this is useful for anyone that’s bothered by the two separate identities: at least for Bloomberg, and I suspect other non-native implementations, if you’re emulating modules in userland with a bundler or something similar, it’s beneficial from a slight optimization point of view for these not to be the same object. The import defer namespace is likely to be an object with getters, maybe a proxy, and it’s going to have slightly slower property access even when well optimized, whereas the direct import star, in cases where it’s a trivial namespace with no cycles and such, can be a really, really simple object. So the fact that tooling is allowed to make these two objects different is a kind of nice little performance win. I also like the consistent error semantics, both from a semantic point of view and an implementation point of view.
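The userland emulation ACE describes can be sketched roughly like this. This is a hypothetical illustration (the function name, Proxy shape, and state handling are my own, not from any bundler or from the proposal’s spec text) of a deferred namespace that evaluates the module on first property access and remembers a thrown evaluation error, so every later access rethrows it—the consistent error semantics discussed above:

```javascript
// Hypothetical sketch of a userland "deferred namespace": evaluation is
// triggered by the first property access, and an evaluation error is
// remembered so that every later access rethrows it.
function makeDeferredNamespace(evaluate) {
  let state = "unevaluated"; // "unevaluated" | "evaluated" | "errored"
  let exports = null;
  let error = null;

  function force() {
    if (state === "unevaluated") {
      try {
        exports = evaluate(); // run the module body once
        state = "evaluated";
      } catch (e) {
        error = e; // remember the evaluation error
        state = "errored";
      }
    }
    if (state === "errored") throw error; // rethrow on every access
    return exports;
  }

  return new Proxy(Object.create(null), {
    get(_target, key) {
      if (key === Symbol.toStringTag) return "Module";
      return force()[key];
    },
    has(_target, key) {
      return key in force();
    },
  });
}
```

As ACE notes, such an object has slower property access than a plain namespace object, which is one reason allowing the two namespaces to be distinct is convenient for tooling.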
+
+CDA: Daniel.
+
+DE: So given all the skepticism about adopting the semantics in PR 43, maybe we can accept the slight inconsistency that is documented here, in favor of avoiding the other inconsistency. Is there a big problem with that?
+
+NRO: I believe it’s observed very rarely—as in, two property accesses and nothing else. I would consider avoiding race conditions to be more important, unless somebody has some strong problem with this. And as Ashley mentioned, it has advantages: these are easier to polyfill.
+
+DE: Right. I have to say, I’m a little bit surprised by the response, by the concerns about it, because I think the principles that Ashley is talking about would also apply to native implementations being easier. But if people view this identity as very important, maybe the -- it’s hard for me to understand how bad the race condition that you’re saying is. If we’re choosing between one or the other, given that we can’t go back and make it an error to read a property on the module object.
+
+NRO: We’re at time, by the way, so there’s still a bunch of topics in the queue.
+
+JWK: Webpack generates different module namespace object for each import site, so if you depend on this, that’s already broken for webpack users. I don’t think it’s really a serious problem. (Note: only for ES namespaces generated for CommonJS modules)
+
+NRO: I’m surprised to hear this. I thought webpack would cache the namespace object if you use it. But still, do you know if this has caused any problems for webpack users?
+
+RPR: Yeah, Jack, the question was: do we know if those multiple identities have ever caused problems for webpack users?
+
+JWK: Hmm. I don’t think there is one, but I need to verify.
+
+NRO: And, again, I’m surprised by this, but thank you.
+
+RGN: All right. Thanks CDA for doing your best to create a cycle in TCQ.
+
+CDA: I was talking with somebody about this: a lot of people don’t click when they’re done, and if a chair advances at just the right time, there’s a race condition where we both click the button and we end up advancing the queue past where it’s supposed to be. So that’s another TCQ reloaded feature we are looking forward to, not having that be a problem anymore. Please go ahead, RGN.
+
+RGN: Yeah, yes, it’s actually convenient for this. I want to offer some kudos for catching the reentrancy bug. And pointing out that this actually highlights the need for us to have general reviewer guidance specifically to look for them. Because it’s a subtle area, and it’s one that really matters for a lot of proposals. I do support the resolution in this case. I think it makes a lot of sense, and it looks good. That’s all.
+
+CDA: LCA.
+
+LCA: Yeah, while we’re talking about the namespace thing, I do want to bring up a third idea that I was discussing with some champions offline—one that I don’t think we should do, but I wanted to bring it up for completeness—which is that we could also say that none of the module namespace accesses ever throw, including the first one, if the module does not evaluate correctly with defer. So we would essentially just capture this error and it would never get exposed to user code; we would just log the error to the console or something. And this is very weird, and probably fails in more cases, but, yeah, this is also an option.
+
+CDA: HAX. Oh, that topic may have been deleted.
+
+NRO: I think the question was answered in Matrix already.
+
+CDA: Okay, great. Well, that’s it for the queue.
+
+NRO: Okay. So if there’s nothing else: in the pull request I’ve heard opinions—well, preferences—in both directions. Given that none of them is blocking, I would prefer to go ahead with this as proposed here, because I believe it would cause less confusion, not more. So, yeah, I’m asking if we have consensus for 2.7. It would be conditional on the spec editors finishing the editorial reviews.
+
+CDA: LCA.
+
+LCA: Yeah, I’m in favor of this going ahead for 2.7 as is. I think the proposed PR43 behavior is better than the pre-PR43 behavior, even though I think it’s weird, and nobody proposed anything better, so I think we should go ahead as is.
+
+CDA: All right. Just quickly scanning the queue for anything besides plus ones, and there is not. You have a plus one from me for 2.7, plus one also from Richard. Also from Duncan, with a comment: this weirdness is better than race conditions. That is it for the queue. You have 2.7.
+
+CDA: NRO, can you can dictate a summary and conclusion for the notes, please.
+
+### Speaker's Summary of Key Points
+
+The proposal went ahead with its original top-level await semantics: `import defer` will eagerly pre-evaluate the asynchronous dependencies. In addition to that, there have been two changes since last time:
+
+- throw in case of reentrant evaluation (this was a spec bug because it was violating multiple spec assumptions)
+- `import defer` now gives a different namespace object than the classic namespace object you get from `import *`, so that property access on a module whose evaluation throws can always throw, rather than only throwing if you happen to be the first one evaluating that module.
+
+### Conclusion
+
+We had consensus for Stage 2.7, including the two additional proposed changes.
+
+## GitHub Teams Notice
+
+JHD: I wasn’t here this morning, but regarding reviewing the GitHub teams, I can…
+
+CDA: Yes, please review GitHub teams. And actually, who was it? There was somebody from last time where we had discovered that their GitHub team was woefully out of date. Maybe I shouldn’t name them on the record.
+
+JHD: There’s a number of GitHub teams that seem out of date, so it would be ideal if you reviewed your own employer or member company’s GitHub team. And if you are not the point of contact for ECMA, then please poke that person to file the appropriate onboarding and offboarding issues. That’s all.
+
+CDA: Yes. I will say that it’s not necessary that you are the GA rep to do that. If you know that folks have left your company and are no longer delegates, please don’t let that stop you from filing an offboarding issue. Great, we will see everyone tomorrow at 2 a.m. Chicago time. Thanks, everyone.
diff --git a/meetings/2024-06/june-12.md b/meetings/2024-06/june-12.md
new file mode 100644
index 00000000..9b770827
--- /dev/null
+++ b/meetings/2024-06/june-12.md
@@ -0,0 +1,1089 @@
+# 12th June 2024 102nd TC39 Meeting
+
+**Attendees:**
+
+| Name | Abbreviation | Organization |
+|-------------------|--------------|--------------------|
+| Daniel Minor | DLM | Mozilla |
+| Ashley Claymore | ACE | Bloomberg |
+| Jonathan Kuperman | JKP | Bloomberg |
+| Jason Williams | JWS | Bloomberg |
+| Waldemar Horwat | WH | Invited Expert |
+| Richard Gibson | RGN | Agoric |
+| Philip Chimento | PFC | Igalia |
+| Jesse Alama | JMN | Igalia |
+| Chengzhong Wu | CZW | Bloomberg |
+| Michael Saboff | MLS | Apple |
+| Duncan MacGregor | DMM | ServiceNow |
+| Keith Miller | KM | Apple |
+| Chip Morningstar | CM | Consensys |
+| Tom Kopp | TKP | Zalari |
+| Istvan Sebestyen | IS | Ecma International |
+| Sergey Rubanov | SRV | Invited Expert |
+| Samina Husain | SHN | Ecma International |
+| Aki Rose Braun | AKI | Ecma International |
+| Chris de Almeida | CDA | IBM |
+| Shane F Carr | SFC | Google |
+| Ron Buckton | RBN | Microsoft |
+| Mikhail Barash | MBH | Uni. Bergen |
+| Romulo Cintra | RCA | Igalia |
+| Nicolò Ribaudo | NRO | Igalia |
+
+## ShadowRealm Update
+
+Presenter: Philip Chimento (PFC)
+
+- [proposal](https://github.com/tc39/proposal-shadowrealm)
+- [slides](https://docs.google.com/presentation/d/1HxocWS0WfIZPanCAhsabSDOPx9LCjw6upMfMP9ogEqE/)
+
+PFC: My name is Philip Chimento, I work for Igalia. I’ve been doing some work on ShadowRealm in partnership with Salesforce. This is a very fast update to let you know where we are and what is going to happen in the near future.
+
+PFC: We last talked about this proposal in February at the plenary in San Diego. We described the criterion for deciding which web APIs to include in ShadowRealm, which I will summarize with the word confidentiality. And we have a PR open to the HTML spec, which is awaiting review from the HTML reviewers. Since then, we have collected feedback from the reviewers, and from Mozilla about difficulties with WPT coverage: it would be more useful to have coverage of ShadowRealms created inside different types of realms, not just from inside the main JavaScript realm—for example, a ShadowRealm created from inside a worker.
+
+PFC: We’ve heard this feedback, and more feedback is welcome. We would like as much as possible to have the HTML PR in a mergeable state as soon as we can. So that’s what we will work on in the short term: investigating ways to simplify the criterion for which web APIs to include in the ShadowRealm, to get rid of the confusion about what confidentiality exactly means, and we will continue to add WPT coverage, including specifically addressing the concerns that we heard from Mozilla. And that’s it. It was just one slide.
+
+PFC: I think DE already let me know he wanted to say something in the queue. But I don’t think he’s here yet.
+
+DE: [inserted later] I recommend applying the criterion, “ShadowRealms should contain the intersection of what’s in all conceivable environments”, which implies that they are missing everything to do with I/O [excluding import(), timers, DOM, etc.](https://github.com/tc39/proposal-shadowrealm/issues/398#issuecomment-1939418911) I agree with the set of things that are spec’d as Exposed=* that Igalia has put together--they seem to be following this principle already. We will need to document this design principle in the W3C TAG design principles document.
+
+RPR: That’s right. Dan is not here at the moment.
+
+MAG: I just—the thing I want to say is, thank you very much for all the work on trying to improve the WPT stuff. It is better. It’s in a much better state than it was. And we look forward to more work on this. But yeah. That’s about all I have to say.
+
+PFC: Okay. Thanks.
+
+RPR: There is nothing more in the queue. Any more comments or questions about this proposal? No? Okay. Then I think we can move on.
+
+PFC: All right. Thank you, everybody.
+
+### Speaker's Summary of Key Points
+
+- Since the last time we discussed ShadowRealm, we’ve been collecting feedback from HTML reviewers and Mozilla, but would still like to hear more from other parties.
+- In the short-term, we’ll focus on addressing the confusion around the “confidentiality” criterion, addressing Mozilla’s concerns about WPT coverage, and merging the ShadowRealm HTML integration into the HTML spec.
+- DE recommends adopting the criterion, “ShadowRealms should contain the intersection of what’s in all conceivable environments”
+
+## Sourcemaps Progress Updates
+
+Presenter: Jon Kuperman (JKP) and Agata Belkius (BEL)
+
+- https://github.com/tc39/source-map
+- [slides](https://docs.google.com/presentation/d/1H6nu-Q0FllP2rsnCRxepiB_iBgsA0TMba5FGntDL5fg/)
+
+JKP: I am Jon Kuperman. I am presenting today with BEL for the sourcemaps TG. The reason I requested a longer session, as opposed to my normal short update, is that we’re doing a lot of work in our small group and wanted a chance to dive deeper and get feedback from plenary on the things we have been working on. In this update, we are hoping to cover our constituencies and process, and go through a list of the specification fixes that we have implemented. These are fixes we have merged into the draft of the specification, but we have not yet come to TG1 asking for rubber-stamp approval, so they are not officially approved. I also have a specification question to get approval on. Then we’ll talk about the three new feature proposals and their current status, and switch to BEL—we built a validator for validating generated sourcemaps and a test suite, which other tools are applying to handle edge cases—and I hope it doesn’t take the whole time, so we can get some general feedback at the end.
+
+JKP: So the really high-level updates, what I would have done normally yesterday: this test suite we have been working on, alongside people at Bloomberg, Igalia, and Mozilla, has been merged into the Firefox devtools sourcemap repo and will soon be merged into Chrome devtools. We needed to add a license, which we did yesterday. We added it to the testing products, and if you are in the area, we have a hackathon in two weeks, June 24 and 25, in the Google office in Munich. A lot of us will get together and work on upping the test counts, implementations of the new proposals, all sorts of fun stuff. That is open to everybody; if you’d like to join, sign up via the slides or talk to me.
+
+JKP: So yeah, the goals of the update: under a year ago, I came to plenary and asked for consensus on forming the task group. We will be coming toward the end of this year for a rubber stamp on the specification updates, and one thing that I’ve been trying to do is find the right balance—not taking a lot of plenary time with sourcemap-specific information every 2 months, but also not doing too much work in a silo without communicating it well enough. I don’t want there to be surprises or any wrong paths we don’t figure out until too late. So I want to give an in-depth update and get feedback.
+
+JKP: A bit on the way we have been thinking about sourcemaps—we have all of this in the documentation in the TC39 sourcemap repo. We have been thinking of the constituencies in three terms: generators, debuggers, and error monitoring tools. The generators are tools that write sourcemaps to disk—bundlers, transpilers, anything generating a `.map` file. Debuggers would be browsers, but also a lot of standalone debuggers; as SHN said, Replay.io is joining TC39 and has a cool standalone debugging application. Those don’t generate the sourcemap, but read it and provide a debugging experience. And the third are the error monitoring tools. Sentry is joining TC39. They do things like wrap the error object in your application, keep track of real user errors, and use sourcemaps to help you with stack traces and figuring out where the errors came from.
+
+JKP: I did a presentation a few plenaries ago on the process. We have a multistage process: proposals start off and move through via consensus vote. This gets us to our internal stage 4, and then it comes to plenary asking for approval and merging into the official annual spec. The process is pretty similar to the TG1 process. Our stage 1 is a problem defined in an explainer document, with or without a solution. Also, if there’s anything that people have questions on with this stuff, feel free—this is what we currently have published. Stage 2 is a set of details on the problem, and experimentation is encouraged in stage 2: we have been adding proposals into sourcemap generators or debuggers.
+
+JKP: We have this list here, https://github.com/jkup/source-map-users, linked in the slides, of all the sourcemap users. It is incomplete, but I am trying to compile a list of every project that generates or reads sourcemaps: programming languages, bundlers, compilers, debugging tools, and error monitoring tools as well. We have a stage 3, when the solution has been completely written and has at least one implementation in a generator. So this is our first step: after we have a solution idea, we work with some of the generating tools to look into the reality of actually generating that sourcemap information in the project, and again the test suite, which is also in the TC39 repo. In a similar vein, when we get to stage 3, we like to have some confidence that it would be hard to improve the proposal without actually testing a real-world scenario and sorting out the things we need. Stage 4 is the completion stage: that is implementations in at least two of each of the constituencies—generators writing it, debuggers reading it, and error monitoring or stack trace tools—and the test suite complete. And that’s what we will take to TG1 each time.
+
+JKP: So I wanted to go through the fixes we have already merged in. A lot of them are minor, but I think they have been a good exercise in taking a specification that hasn’t had a lot of attention and—there’s a phrase that got used yesterday, “documenting the reality” or something like that—we have been trying to take a survey of how everybody is currently working with sourcemaps and update the spec to be an accurate representation of what people are already doing.
+
+JKP: We have this version field, which must be the integer 3. The spec had this mandatory thing—this is for JSON hijacking—that when a sourcemap is served over HTTP, consumers should look for and remove the protective prefix line. We have modified the spec to match what the tools actually do and check for. We have both sourcemaps, like a singular sourcemap, and also the idea of index maps, which can be a representation of multiple sourcemaps; we found a URL field there that was entirely unused across the constituencies, so we removed it. Precedence: there are multiple ways to tell the debugger where the sourcemap is. One is a `sourceMappingURL` comment in the code itself, and the other is the `sourcemap` HTTP header. With the folks at Sentry, we got a test suite using all the modern bundlers to write the comments and add the HTTP headers, and tested the generators: the HTTP header takes precedence, so we put that into the specification as well. We have been working with folks in the WebAssembly world, making sure sourcemaps are a good debug tool not just for JavaScript but for WebAssembly and CSS as well. We noticed things in the sourcemap spec that were only applicable to JavaScript and CSS, and so we have redefined a lot of these things—for example, what a `column` is in the WebAssembly world, defining it as byte offsets, as opposed to what a column is in the JavaScript or CSS world.
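As a rough illustration of the JSON-hijacking guard (this helper and the `)]}` prefix handling are illustrative sketches of the common convention, not spec text), a consumer fetching a sourcemap over HTTP strips the protective first line before parsing:

```javascript
// Illustrative consumer-side handling of the JSON-hijacking guard:
// if the fetched sourcemap text begins with the protective ")]}" prefix,
// drop the first line before handing the rest to JSON.parse.
function parseFetchedSourceMap(text) {
  if (text.startsWith(")]}")) {
    const newline = text.indexOf("\n");
    text = newline === -1 ? "" : text.slice(newline + 1);
  }
  return JSON.parse(text);
}
```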
+
+JKP: We have this kind of interesting thing, where a feature came in organically during the time when nobody was working on the sourcemap specification. Google added this ignore list: a list of files that ought not to be used in debugging for various reasons—maybe they are library files—so basically they would be skipped. They added this experimental `x_google_ignoreList` via the Angular project, Chrome was reading it, and Mozilla read it in Firefox as well. So we moved to `ignoreList`, deprecating `x_google_ignoreList`. It still works in browsers, but we marked it as deprecated. In general, we have had discussions about the future of these `x_` extensions and decided they’re not the way we’d like to move forward. We want people to join the task group, and we have marked these `x_` extensions as no longer supported.
+
+JKP: Similarly, we worked on making our spec link better with other existing specifications. Rather than re-describing a lot of things around Wasm ourselves, we deep-link to the Wasm specification wherever possible. For what a Wasm module custom section is, or the names binary format, the sourcemap spec links directly to the W3C spec for Wasm, so you have fine-grained and well-structured specification text for what these things are and how they work.
+
+JKP: And similarly, we had a lot of terminology around fetching sourcemaps with no description of what that meant or how it works. We rewrote the specification so that whenever we talk about fetching, it’s in terms of the Fetch API. We had this concept of correctness, trying to update the specification to match the world. That’s what we have come up with so far.
+
+MF: You mentioned that the `x_` extensions are deprecated. Can you explain what you mean by deprecating? What would that mean for an implementation to deprecate them?
+
+JKP: For all the things we have done so far—sorry, I just asked about this yesterday: what is the opposite of normative changes?—these are editorial changes, in the sense that we are not requiring any changes on the generator side or on the debugger side. Tools are still keeping the `x_google_ignoreList`. We used to have text around it, something like: if you want to experiment with new features, try using an `x_` prefix. That was spec text we didn’t really like, which hadn’t gone through the process. We removed the spec text suggesting you use these `x_` prefixes to experiment.
+
+MF: Okay. For the existing ones they are documented and just recommend –
+
+JKP: The only existing one is the `x_google_ignoreList`. It is still documented in the spec and marked deprecated, but we are not removing anything. What we removed is the editorial text suggesting how you experiment with new features.
+
+MF: Okay.
+
+JKP: Thanks. So this is an open question that we have—and I apologise, I am never sure what the TG1 context is. When tools generate files—like a minifier, or a tool removing types—they add a comment at the end of the file, `sourceMappingURL`, which points to a file name. Without the HTTP headers, that is how consumers locate the sourcemap file.
+
+JKP: The specification is extremely vague about this. It just says that a `sourceMappingURL` comment might be there. So we have been trying to flesh this out and say exactly what that means, where it is, and how to retrieve it. Our first step, which I think NRO took, was trying to make a regular expression to parse it. Which is really hard—maybe impossible—to get something you can verifiably prove is a JavaScript comment. It’s fast, but it’s inaccurate. The other option would be to parse the generated file, which is unacceptably slow for tools like IDEs that do quick lookups. We have a draft PR, which I have linked in a slide, proposing an either/or approach: the way to look up a sourcemap is the `sourceMappingURL` comment.
+
+JKP: And basically, it gives two approaches. If you need to do it quickly, you can use a regular expression. And if you need to be extremely accurate, you can parse the entire generated file and then walk it, which we have specified. I think this is something that came up in the task group; people were unsure about this kind of thing. Is it okay to have the either/or text? It has to be okay in some sense: nobody is going to switch from one approach to the other—IDEs will not start parsing the entire generated file. And yeah, this is something that we are curious about and trying to work through: whether it is an acceptable solution to have two different means of getting this comment. I am not sure if people want to go on the queue and answer that later or not, but I have linked the issue in a later slide.
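To give a concrete sense of the trade-off, here is a toy version of the regex approach (the pattern is my own illustration, not the one in the draft PR). It is fast, but it cannot tell a real comment from the same text inside, say, a template literal:

```javascript
// Toy "fast but inaccurate" lookup: scan lines for a sourceMappingURL
// comment with a regex instead of parsing the file as JavaScript.
const SOURCE_MAPPING_URL_RE = /\/\/[#@]\s*sourceMappingURL=(\S+)\s*$/m;

function findSourceMappingURL(source) {
  const match = SOURCE_MAPPING_URL_RE.exec(source);
  return match ? match[1] : null;
}
```

A full JavaScript parse would reject the template-literal case, while a regex like this happily extracts it—exactly the divergence between the two approaches being discussed.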
+
+JKP: Then the three new proposals. Two of them have been progressing pretty far. And one of them is in the early stages. I wanted to talk about these. And get some feedback, or communicate—oh yeah.
+
+LCA: Let me repeat that question. You mentioned this either/or approach; I assume this is for parsers. Does that include wording that the comments that tools or generators emit must be parsable by both approaches?
+
+JKP: That’s a good question—NRO might have a better answer. We were considering requiring that it must be the last line, after a newline—something for the generators—but then we immediately ran into other proposals coming through at earlier stages that also want something added to the last line. For example, people want this debug ID. I am not sure how far we have come on that side. You are correct: this spec text is only for the parsers, for the debugging tools.
+
+NRO: Yeah. The comment can also not be on the last line; there are many cases in which the comment is close to the last line but followed by other comments, which makes it easy for the two approaches to give different results. I am not sure about the current state of the pull request, but it’s a good idea to require that tools emit the comment in a way such that, regardless of which approach you take, you get the same result.
+
+JKP: Yeah. I guess the short answer is no, we don’t have any currently proposed specification text for the generators. We should talk about it afterwards and try to come up with something.
+
+LCA: Thanks.
+
+JWS: Yeah. I heard you mention documenting the reality in the spec, and I understand there are some generators—or tools reading the sourcemap—where the version 3 was sometimes a string or a number, or it wasn’t even being looked at, while the spec says just 3. I was wondering what the story is there. Are you thinking about a version 4 in the future, or is it just 3 permanently from now on?
+
+JKP: From now on, 3 permanently. And that brings up a good point: we are trying to be extremely careful, because this feels tricky. What we don’t ever want to do is, for example, tell the parsers that they need to do something different. For example, in Chrome, if they check the version and there’s no version field, they salvage the sourcemap. When we talk about something being required, we talk about it only in terms of the generator’s side: in order for a generator to be valid, it must emit a version and it must be 3. But we don’t have any text of that sort for the consumer side, saying that if the version isn’t there, you must throw. The parsers are free in all ways to try to salvage or assemble sourcemaps that are invalid, because that is the reality of the situation. So whenever we make any of these spec changes, they are from the perspective of the generator and what it should be doing, and less about the consumer, because consumers are free to do their best to salvage.
+
+WH: On the topic of the question currently on the screen, what are the consequences of some sneaky user sneaking in a sourcemap URL which this thing parses incorrectly?
+
+NRO: Yeah. So, well, the consequence in general of sourcemap URLs is that when you are looking at your code in your browser’s devtools, if you have sourcemaps enabled—and they are usually enabled by default—you don’t see the source code actually running, and likewise for the stack traces. Potentially, you could, I guess, have some malicious code that is hidden unless you look very closely at it. And with these two approaches, having two possible ways to get the comment might make it harder for tools that want to make sure there is no sourcemap attached to a file to prevent this code-hiding problem, because maybe they look for the comment following one of the approaches, while the debugger the developer is using might follow the other approach. So, for example, you could hide the comment in a template literal: a tool looking for the comment with a JavaScript parser would not find it, while a tool looking for the comment with a regex run on each line would extract it. So the sourcemap might be applied to the code when you try debugging it, while the checking tool did not catch it.
+
+WH: So can one get browsers to fetch things from attacker-controlled URLs and can that cause problems?
+
+NRO: So browsers only fetch sourcemaps when devtools are open. So that’s not a risk on the web. In general, yes, if the devtools are open and the browsers see the sourcemap, they will fetch, like do the HTTP request.
+
+JKP: I think this is a really good callout. This isn’t an area we have talked about. I assume the browsers themselves have security code around this, but I actually have no idea. So yeah, I do think that’s a good question. I can make an issue for that and talk to some of the consumers about whether there are any protections against linking to something malicious in the sourcemap URL.
+
+WH: Okay. Thank you.
+
+JKP: We have these 3 new proposals. Scopes is a large one, range mappings is smaller, and debug ID is more of an early-stage thought than a full-fledged proposal.
+
+JKP: So just a quick recap—I am never sure what the context is. A sourcemap is a JSON file with a version field, which is 3. Then there is `sources`, an array of your original files; these could be TypeScript or JavaScript, or C++ when using WebAssembly. When you use a tool that bundles or compiles the code, the `sources` array lists the original files that went into the tool that generated the sourcemap. If you run three JavaScript files through a JavaScript bundler and it outputs one bundle, it will keep track of the 3 original files. Then there is `sourcesContent`: an array for the same files, but with their inline content. This is an optional field, but a lot of tools include it, so when you do the lookup you can go to `sourcesContent`. If you imagine `sources` had three files, then `sourcesContent` would also have three entries in the array, which would be the inlined, escaped content of those three files.
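+
+As a rough sketch (file names and contents invented), a version-3 sourcemap for a three-file bundle might look like this, with `sourcesContent` mirroring `sources` entry by entry:
+
+```javascript
+// Minimal illustrative version-3 sourcemap; names and contents are made up.
+const sourceMap = {
+  version: 3,
+  file: "bundle.js",                           // the generated file
+  sources: ["index.js", "util.js", "app.js"],  // original inputs, in order
+  sourcesContent: [                            // optional inline copies, same order
+    "import { greet } from './util.js';\ngreet();",
+    "export function greet() { console.log('hi'); }",
+    "// app entry point",
+  ],
+  names: ["greet"],                            // optional original identifier names
+  mappings: "AAAA",                            // Base64-VLQ encoded segments
+};
+```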
+
+JKP: And then the last bit is the `mappings` field. Whenever you're stopped at a break point in the generated code, the mappings provide the ability for these tools to look up exactly what place in the original source that generated code came from. The mappings are the complicated bit. They are Base64 VLQ data: there are groups separated by semicolons, and inside each group are comma-separated segments, the tokens that do the pointing. Each segment is variable-length encoded and made up of 1, 4, or 5 fields. So each one of these `AAAA`-like encoded things contains this information.
+
+JKP: So the first field is: in the generated file, which column are we at? When you hit a DebuggerStatement, we are on the 50th column or something like that, so you go 50 columns over in the generated file. The other, optional fields come next. One is an index into the `sources` array: say you are in bundle.js at the 50th column; an index of 0 into `sources` points at the first file, which would be this index.js file here. And then, for that source file, the segment contains a line and a column. So from any point in the generated file you know which source file it points to, and in that source file which line and which column. This allows mapping from hitting a debugger statement in a big minified bundle back to which source file it came from and exactly which line and statement in that source file—this is how debuggers land you on the right spot in the source file. There is also the optional ability to add the `names` array; we will talk about this later. It's an optional array in the sourcemap, but it is not well specified what you can use it for. It's there, and browsers use it for different things. If you have a generated file with multiple lines, those get represented by the semicolons: a segment itself encodes only a column, and the format supports lines by prepending semicolons. So when you get stopped somewhere, you look at the mappings list to find which line and which column in which source file. This is how these debuggers are able to do that.
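+
+A minimal sketch of decoding one comma-separated segment of the `mappings` field, assuming the standard Base64-VLQ scheme (bit 5 is the continuation flag, the low 5 bits carry data, and the least significant bit of the assembled value is the sign). This is a simplified decoder, not a full mappings parser:
+
+```javascript
+const BASE64 = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
+
+// Decode one segment like "AAAA" into its numeric fields.
+function decodeVlqSegment(segment) {
+  const values = [];
+  let value = 0;
+  let shift = 0;
+  for (const char of segment) {
+    const digit = BASE64.indexOf(char);
+    if (digit === -1) throw new Error(`invalid Base64 character: ${char}`);
+    value += (digit & 31) << shift; // low 5 bits carry data
+    if (digit & 32) {
+      shift += 5;                   // bit 5 set: more digits follow
+    } else {
+      const negate = value & 1;     // least significant bit is the sign
+      value >>= 1;
+      values.push(negate ? -value : value);
+      value = 0;
+      shift = 0;
+    }
+  }
+  return values;
+}
+```
+
+For example, `decodeVlqSegment("AAAA")` yields `[0, 0, 0, 0]`: a mapping where every delta is zero.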
+
+JKP: So here's a visual example. You can hover over any bit of either the original code or the generated code and the arrows point across. A debugger statement in the generated code, line 4, and it knows it maps back to the 0-indexed index.js file at line 2, offset 2. Basically, if you decode the mappings: which point am I at, where does it point to, and which indexed file? Cool. So, scopes. This is a big proposal. This is an example of a sourcemap working: you are in some devtools and you hit a debugger statement in `bundle.js`, and it had a map, so you can see where the debugger statement was in the source code. All the browser devtools support debugging, and one feature is the ability to display scopes—maybe there's global or module scope, the variables you have access to—all this really useful stuff when debugging. None of that is encoded in sourcemaps. It's something customers and users want: the ability to access all this information. Yet it is not present in the sourcemaps. It's on browsers, with no specification, to come up with how exactly they will implement this. And this is the way it's been for many years.
+
+JKP: Basically, all the browsers have taken different approaches. Chrome uses heuristics: am I in a function? What are the parent functions? What scope? They assemble all this. It's quick, but it can lead to bugs—for example, the generated variable names could slip through instead of the source variable names, so you get these inconsistencies. Firefox loads Babel and parses the generated file, so they can be more sure what the scope information is, but that's quite a bit slower. So this is an area where basically all of the debuggers need scope and variable information, they don't currently have it, and they have to go out of their way to generate it in an unspecified manner.
+
+JKP: It's more than just the scopes themselves and the names, because a lot of tools these days do quite a few optimization steps. For example, terser can do constant folding or function inlining. You could end up stopped at a break point in a file where not only do you not have the scope information, but you are not even aware that the code used to be one function inlined into another. You lose all of this information. So this is essentially impossible for the browsers to reconstruct right now; they don't have the information about what was there.
+
+JKP: And so basically, we came up with some goals—we merged two original proposals. The goals here are to be able to reconstruct and step through inlined functions: whatever optimizations compilers do, we should be able to recreate all of the functions that used to exist. We also want to do variable mapping. A lot of tools will minify and mangle variable and function names, and today you cannot look up the original names. We want to know we are in a function called foo with two variables named one and two, and to have a high level of confidence that this is how it was authored in the original code. And along with that, we want to be able to reconstruct scopes. Again, like I showed in the devtools, to be very sure that hey, at this point in your original code, however it was authored, here is the scope information you had: here is what you had access to in both the local and the global scope. And we have a proposal to do this by adding 2 fields to the sourcemap, which we call `originalScopes` and `generatedRanges`. Here's a sourcemap: it has the names, version, and mappings, and then the 2 new fields, `originalScopes` and `generatedRanges`. The way it works is that the generating tools, the bundlers, go through and, for each scope they come across, add a scope entry to the `originalScopes` array. As they are parsing and bundling the file, they keep track of each scope with its original start position, end position, its kind—like a function or a global scope—and optionally a name and a list of all the variables inside it. This is a tree of nested scopes. Basically, this is information that the bundlers already have access to; it's a big part of their workflow as they are minifying or compiling or bundling code together. This is something new they would be appending onto the sourcemap as they go.
+
+JKP: And so yes—you end up with the array of nested scopes, and this says: hey, we had a function here, two variables, here are the names, and it was a function scope. That's one side of it. At the same time we have the other section, the `generatedRanges`. These are start and end position ranges. Just like we do with the mappings in sourcemaps today, from any debug point we would check: is there a generated range whose position I fall inside? If so, do I have a scope that applies to me, essentially? And that links to one of the `originalScopes`. This would be quite a bit more information embedded in the sourcemap, but it allows a fast and accurate ability to reconstruct exactly the state you are in with respect to the original code.
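+
+A hypothetical decoded view of the two proposed fields (the actual proposal encodes them as compact VLQ strings, and the exact field names here are illustrative, not taken from the spec text):
+
+```javascript
+// Decoded, illustrative shape of the scopes proposal's data.
+const originalScopes = [
+  {
+    kind: "function",
+    name: "foo",
+    start: { line: 1, column: 0 },   // position in the original source
+    end: { line: 4, column: 1 },
+    variables: ["one", "two"],       // names as authored
+    children: [],                    // nested scopes form a tree
+  },
+];
+
+const generatedRanges = [
+  {
+    start: { line: 0, column: 10 },  // range in the generated bundle
+    end: { line: 0, column: 52 },
+    originalScope: originalScopes[0], // link back to the authored scope
+    // A debugger stopped inside this range can reconstruct foo's scope,
+    // e.g. knowing that `one`/`two` were renamed by the minifier.
+  },
+];
+```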
+
+JKP: Very happy to talk more about that later, but it is a proposal that we have been working on; it is linked from the proposals folder on GitHub. This is what we have been working on for the scopes proposal—this is the big one.
+
+JKP: The next one is range mappings. We have run into this quite a bit. The TLDR: when you have a multi-step build process—TypeScript that generates JavaScript, and then terser—you have to take the sourcemap generated at each step and apply them to each other, combining them. The way this apply operation works, a token has to be available at the same position in both maps in order to persist through. What you end up with, in multi-step processes, is extremely low-fidelity sourcemaps at the end. If you ran through 3 or 4 compilation steps, removing whitespace or comments, you end up losing a ton of information. The effect is that when you set a break point in a browser debugger, sometimes the best it can do is take you to that file, or to the beginning of that line, because a lot of the fidelity of where it was has been lost.
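+
+A toy model of the apply/compose step on a single line, using point mappings only (function and field names invented): a final-to-intermediate mapping survives only if its intermediate column has an exact match in the earlier map, which is where fidelity gets lost.
+
+```javascript
+// Toy composition of two single-line, point-only sourcemaps.
+// stepA maps intermediate -> original, stepB maps final -> intermediate.
+function compose(stepA, stepB) {
+  const result = [];
+  for (const b of stepB) {
+    // Only an exact column match survives; anything between tokens is lost.
+    const a = stepA.find((m) => m.generatedColumn === b.originalColumn);
+    if (a) {
+      result.push({ generatedColumn: b.generatedColumn, originalColumn: a.originalColumn });
+    }
+  }
+  return result;
+}
+
+const stepA = [{ generatedColumn: 0, originalColumn: 4 }];
+const stepB = [
+  { generatedColumn: 0, originalColumn: 0 }, // survives: exact match in stepA
+  { generatedColumn: 9, originalColumn: 2 }, // dropped: column 2 not in stepA
+];
+const composed = compose(stepA, stepB);
+// composed keeps only the first mapping; the second one's fidelity is lost.
+```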
+
+JKP: And so the range mappings proposal is essentially to be able to mark a mapping as either a specific point or a range. These ranges would allow the apply operation to not lose fidelity. Adding this additional marker lets projects that combine sourcemaps retain a high level of fidelity, so it's much easier to get accurate debug information when you step through. I have a link here as well to this proposal.
+
+JKP: The third one, which is at a much earlier stage—and I think this one has some trickiness to it; it requires more than just TC39 approval—comes from people building error monitoring tools. They wrap the error object, and when your application crashes or has a bug, they send the error to their service, along with a sourcemap. They run into a lot of problems where sometimes it can be extremely hard to look up which sourcemap an error applies to. Because the sourcemap URL is attached to the generated code itself, it's hard to map: we have an error—which file was that from? It's hard with outdated sourcemaps; some tools have a content hash in the file name and some do not. So for the services that do stack trace analysis, it's difficult to guess from the stack trace which sourcemap, or which version of that sourcemap, should be applied.
+
+JKP: So they have come up with this proposal for the idea of adding a debug ID to the sourcemap, which we are happy to do. That seems easy for bundlers to do. The difficult part is getting the debug ID out when an error occurs. In order for this proposal to progress, we need to work with WHATWG and come up with an acceptable way of how to get it out. This is at early stages, but something we have been working on. That is it for me.
+
+BEL: All right. We can go to the next slide. I will be talking about the approach for testing sourcemaps, because it's quite different from how you normally test new features in this community, I think. Let's go to the next slide.
+
+BEL: Yes. So how sourcemaps actually—what is the life cycle of sourcemaps as I called it? It consists of two main steps: the first one is generating the sourcemaps. And the second one is consuming. It seems simple. But there are many ways to generate sourcemaps and many tools that do it.
+
+BEL: And on the consumer side we have a lot of different tooling. Devtools, and also error monitoring tools that actually consume the sourcemaps.
+
+BEL: Yeah. So, generating: in this very simplistic diagram you can see we have source code; a generator takes the source code, outputs generated code, and also outputs a sourcemap. How do we actually test generators? Here is one example: two different generators given the same source code and outputting the same result. You can get that. But then, because of implementation details, the sourcemap that results from those two different generators is quite different. In this case, SWC has different mappings than Babel, because Babel adds the names of the functions by default, whereas SWC strips them because they are still available in the generated code. So it's not that easy to test generators based on input/output.
+
+BEL: So here you can see those mappings illustrated and visualized. In Babel, you have the bar function name, and then the opening and closing brackets; in SWC, that's all together in one position. That's why we have different mappings here. So the question: how do you actually test generators if they can generate completely different outputs? The solution for us was to validate the mappings they produce, instead of having a fixed set of expected outputs that we check against.
+
+BEL: The first thing the validator checks is the format: is the sourcemap including the correct fields in the correct format? The second thing is actually checking the source files: are they referencing the right source files? Do they exist? Are they accessible? And the third thing, as far as we can—because we can't do exact mappings—is to check the mappings in the sourcemap. For example, we check whether a mapping is actually pointing to something that exists in the original file. So that's the generator side. You can check it out in the repo there; it still doesn't have a final name, so there's a placeholder name for now. Basically, you would use it to parse a sourcemap, parse the generated file and the original folder, and validate: is this a valid sourcemap according to the criteria? That's the first part of the journey of testing sourcemaps.
+
+BEL: Now let's move to the second part: consuming sourcemaps. How are they consumed in general? We have browsers, but we also have engines. For example, Node.js can use sourcemaps: you can have TypeScript, and then you need some kind of mapping back from the JS in the stack trace—that's possible. Then in browsers, of course, we have minified bundles. The first part is: you have your browser or JS engine, you have generated code, and you have your sourcemap. The debugger uses that. It gets the source code from the server, or it could also get the source code from the sourcemap. And then on the other side we also have error monitoring, which usually uses sourcemaps for checking the stack trace and function names, and mapping those to be more useful for debugging as well.
+
+BEL: So how do we actually test the consumers of the sourcemaps? Let's go to the next slide. Yeah. We have multiple steps in this testing journey. The first one is to have a very extensive checklist of what we actually want to test. The second is to have test data. In this case, it is different than for the generators: the consumers can be checked against specific examples and test cases, because we basically take those and run them through the different consumers. So we test against the test data. The next step is to have a test harness; we built test harnesses for Chrome, Firefox, and WebKit. And then we also started to implement the test cases in those harnesses. Of course, the test harnesses are not directly in Chrome or directly in Firefox. For Chrome, it's in the devtools content part of the stack; for Firefox, it's in the sourcemap library of Firefox. So yeah, it's not deep in the browser, but rather in a higher layer of the browser stack.
+
+BEL: Okay. Let's go to the next slide. The checklist is extensive. I took a screenshot here. That’s just the beginning. It goes on for quite some time. Because it’s basically adding every case that we want to test from the spec. And that’s—that is also available in our sourcemap tests repo.
+
+BEL: Yeah. And what does the test specification actually look like? It's in JSON format. We basically have an array of tests. Every test has a name and a description, so we know what we are actually testing there. Then there's a base file and a sourcemap file. And we also have a flag for whether the sourcemap is valid or not. It's useful because you also want to test cases where the sourcemap is not valid: it should actually just be an error, but we can talk later in the slides about why sometimes that doesn't happen.
+
+BEL: And we also have test actions. Test actions test more detailed cases, for example checking mappings. Here you can see the `actionType` is checkMapping. It basically takes in `generatedLine` and `generatedColumn`, `originalSource`, `originalLine` and `originalColumn`, and `mappedName`. This will check whether that specific point matches the point it should match by using the sourcemap. It can also check the name—that's another thing.
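+
+For illustration, one entry in the test-case JSON might look roughly like this (field names follow the slides; values are made up, and the authoritative schema lives in the tests repo):
+
+```javascript
+// Illustrative test-case entry for the consumer test suite.
+const testCase = {
+  name: "basicMapping",
+  description: "Maps a generated position back to index.js",
+  baseFile: "bundle.js",
+  sourceMapFile: "bundle.js.map",
+  sourceMapIsValid: true,
+  testActions: [
+    {
+      actionType: "checkMapping",
+      generatedLine: 0,
+      generatedColumn: 50,
+      originalSource: "index.js",
+      originalLine: 2,
+      originalColumn: 2,
+      mappedName: "foo",
+    },
+  ],
+};
+```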
+
+BEL: Right now we have three types of actions. One is checking mappings. Another is checking transitive mappings—that is for when you have multiple sourcemaps, and you can check that one maps to the next point, the next one maps to the consecutive point, and that all of it in the end results in the outcome we are expecting in the JSON. And the last one is checking the ignore list. More may be added in the future.
+
+BEL: Then one thing we actually discovered—and I think JKP was talking about it before—is that the consumers are very relaxed about what counts as a valid sourcemap. Which is completely okay, because we want the web to keep working as it was working before, right? We don't break the web. The problem was that the test cases were very strict and followed the spec; in practice, many invalid sourcemaps were not rejected. The solution for us is to either have some kind of strictness level in the tests, or at least make the spec more precise so that it allows those cases that are already out there in the wild. The example we were talking about is that most browsers don't actually check the version field, or check it only loosely, or just don't check it at all. Some accept the string "3", and Chrome devtools, I think, just doesn't check anything—the field could not exist and it would be fine. So yeah, that's one of the things we discovered.
+
+BEL: Another, more specific issue revealed through testing is, for example, the VLQ values used in the sourcemap `mappings` field. There's a part in the spec which says there should be a 32-bit limit for the value, but it's not precise about what happens when you exceed that limit. Does that mean the sourcemap is not valid? Should the value be capped to 32 bits? And what about the sign bit—should it be included in the limit or not? There's an issue where we're discussing this. This is one of the places where the spec should be more precise, so that implementations can also follow something precise. An example here: Firefox was not including the sign bit in this limit, where we think it should. That's one example.
+
+BEL: Next is index maps. In the spec, index maps get basically a few sentences on how they are defined, and then an example of what they look like in JSON. But that seems not to be precise enough to actually know how to use index maps. Index maps basically have sections of different files that are sourcemaps themselves—a sourcemap that consists of many sourcemaps. But we don't know exactly how to interpret the requirements: the sections must be sorted by starting position, and the represented sections may not overlap. How do we actually define that in the JSON? It's not really precise, because we don't know exactly how they should be ordered, or what "not overlap" means exactly. It's similar with the part of the spec which says that index maps share the version and the file field with regular sourcemaps, and they get a `sections` field. But we don't know if, for example, a `names` field could be there. What happens if you actually have both `mappings` and `sections` in the sourcemap—should the mappings be completely ignored? It would be great to define these things a little better, or at least say which fields are optional and which are required.
+
+BEL: And the last one is all the other small issues that we found. We would like there to be specific types for fields—for example, `sourceRoot` must be a string. Also, fields could be marked as optional if that reflects the reality of how they are used.
+
+BEL: Another one is an empty `mappings` field. It shouldn't be allowed, but right now it is accepted by the consumers, so that should be addressed in the spec. There's one specific case that came out in Chrome devtools: if you add null to the `sources` array, it gets converted to the string "null", which I don't think we want. So it would also be great to specify the semantics of how the `sources` field should look. That's it for the things we found during testing.
+
+BEL: And for the future proposals: right now the test cases are one big JSON file. For each future proposal, we can have an additional separate JSON file with more cases, and possibly new test actions per proposal. For example, for scopes we are adding a checkScope action.
+
+BEL: And you can check out the source-map-tests repo here. The last part is the next steps for further testing. On the consumer side: land the harnesses into the consumers' codebases. We landed this already for the Firefox sourcemap library, and we are working on landing the Chrome devtools harness into Chrome devtools. Then we also want to make sure that the repo with all the test resources is integrated as a git submodule for all the consumers. Also: add more test implementations for consumers—get more test cases and more implementations in there. On the generator side: better validation coverage for generators. And the last thing is to have some kind of end-to-end test between the generators and consumers, to encompass the entire sourcemap life cycle. That's basically it. Yeah. Thank you.
+
+DE: Sorry for jumping the queue. The group has done a lot of work building this sourcemaps specification, and at a future meeting—maybe the next one—we may come back and ask for approval. We really want to know if there's anything more you would like to know.
+
+DE: So the scopes proposal adds detailed information about scopes to sourcemaps. Although, as JKP said before, this approval by the committee should not be a rubber stamp—we want to be diligent about what we approve—now is the time, because the specification is basically done. Maybe there are a couple of things to fix up, but it's basically done; it's there on GitHub. Next meeting, we may come back and ask for consensus on recommending it to Ecma for standardization in December. The scopes proposal is quite detailed. I am not sure if we should bring it up on the screen and review what it's doing a little bit.
+
+DE: You know, JKP explained the purpose. Is anybody curious to understand more details about this? Or does anyone have advice for what should be brought to a subsequent meeting for full approval? Okay. Because in some ways, this is analogous to ECMA-402,
+
+DE: where TC39 hears multiple drafts and we go through the ECMA-402 proposals in detail—maybe in too much detail—and people may end up believing that they are better internationalization experts than they are. Anyway, I was hoping that we would have at least some kind of back and forth about sourcemaps and their technical detail.
+
+USA: There are a few topics in the queue. I would request, however, that we keep in mind that there are four minutes left in the time box.
+
+CM: This is prompted by DE's question and I wanted to put my two cents in. I like the way that you all have stepped up to deal with this particular can of worms and I'm impressed by what you have done. I don't think your presentation has too much detail, nor is it lacking in detail. I think it is fine, and I very much like the fact that TG4 is working out a style of working that is relatively autonomous compared to, say, ECMA-402, which I think could probably benefit from being a bit more autonomous, and so in general, I am in favor of the approach that you are taking.
+
+RGN: I agree with CM. I have been following along with the proposal, somewhat on the sidelines, and am very happy with the thoroughness and the sophistication with which it has been approached. This presentation further cements that opinion, and I have no other comments because the work appears to be going so well.
+
+TKP: First, I want to second that the detail work is quite nice. I am mainly curious about this version field that will never change and that mostly everyone ignores. So why do we have it? I might have missed it in the comments.
+
+JKP: No, I did not explain that; it is a historical relic. When Google released a new iteration they would bump the version field, and at this point no changes have happened since then, so we made the decision that we don't want to keep doing this version field thing. As far as the consumer side goes, I think it would be okay for us to change the specification text so that you don't need to have it there, but it is a bit tricky to be as safe as possible so that no one throws accidentally. Before our time, that was the way they were doing it—they were on revision 1, 2, or 3 of the sourcemap spec. It's just a historical reason.
+
+KM: Um, I guess I wonder if the version number could be used in a similar way to the Wasm version number: if there was a breaking change, browsers could support both for a while. Although this is less of an issue for sourcemaps, because they are not shipped on the web in the same way, so there isn't the same impossibility of deprecation that Wasm has.
+
+JKP: One thing I have not done a great job with is writing down some of our rationale—I had a note to myself about that. So far our group has been trying hard not to bump the version, and we really want to avoid that. But keeping the field there means that if we were ever in a situation where we wanted to make that kind of change, we would have the option—even though we don't want to do anything like that.
+
+DE: I think WebAssembly is doing that, and they have not incremented yet. I don't know if we have that option with sourcemaps, because so many consumers ignore the version field, and because at the beginning of this era there were different versions. So maybe—we will see. Moving on.
+
+JWS: So I saw that you have deprecated some Google-specific text. Similarly, if the reality is that consumers are not looking at the version, could that also just be deprecated?
+
+JKP: One thing that I am really trying to work on—because I am still sort of newish, though I have been working on this for the last year—is that I still don't feel confident that I have a great list of every sourcemap consumer in existence, and I am driven by a nervousness about removing something that might cause certain projects to throw if it is not there. My personal preference is to keep it, even if we mark it as deprecated. We have fields that have always been marked mandatory and for which we found no consumers that require them, so I think we do have some room, and I will definitely look at it, but I will be hesitant to remove it.
+
+JWS: So you have the concept of mandatory and optional fields—I don't know if that is in the spec, but the version field could be made optional.
+
+DE: What we have been doing already for things that are not used and not generated is just removing them from the specification. So if the reality is that something is not used, we remove it; we don't have to deprecate. Deprecation is for things that are still used sometimes and still generated. With program correctness in TC39 we don't have this option, but since this is a tooling space there is more room to maneuver; in particular, we want to make tighter requirements for sourcemap generators than for sourcemap consumers.
+
+JKP: Like with the index map case: we could find nobody using it, so we felt comfortable removing it. But with the version field, consumers are checking it, so it is harder to move on from that.
+
+DE: Sorry—when I said "in this case", I was speaking not about the version field; I was not responding to the question but rereading it.
+
+NRO: I want to clarify: for the version field, we suspect that maybe nobody actually checks it, but right now devtools generate it, so it is not as simple as just removing it.
+
+PST: Um, I just want to know if there is a way I could add a consumer to the tests, because XS supports sourcemaps—the support seems complete—and I don't know how the sourcemap tests work. Is there a document describing what I should do?
+
+BEL: So that is linked in the presentation: the sourcemap tests repo contains resources that you can use to implement the tests, and it has the big JSON files with all the different test cases. We don't write a harness for every consumer, of course, but you can definitely use it for your project and for the engine. Yeah, feel free to do that.
+
+JKP: Just FYI, right now the harnesses are built on—we took the source code from WebKit and used a minimum subset of the APIs. If you have a consumer that exposes those differently, we would have to work on that, but we would be happy to. And to your point about how to add a consumer, I will make an issue for that, because we don't have that documentation.
+
+NRO: So if XS implements sourcemaps, we would love for you or somebody else to join the group, especially because we are speccing new features, so it would be great to have implementers in the room. We will reach out to you.
+
+RPR: Just to echo RGN's point: this is a high-quality exercise, and it looks like everything is on track. Also, from what I have seen, the set of generators and the set of consumers that have already been engaged or involved is one of the success stories here—on the consumer side the browser devtools, and on the tooling and generation side TypeScript and Babel. Are there any major consumers or generators that you think have been radio silent at the moment, or that you have not reached out to?
+
+JKP: I have a personal wish list—everyone has been friendly, but my bandwidth is limited. I would love to engage more with the compiler community, the tools that generate sourcemaps as well. Similarly, the more WebAssembly people we have the better; I know it is an area that is not necessarily a speciality for me, but we have representatives from the Kotlin WebAssembly team assisting, and I want to get the WebAssembly story right. I know there are large companies out there that do stack trace decoding, and I have sent emails to those large companies—the SaaS companies that do the error monitoring stuff—I would love to have them involved. And I see NIC's comments about Apple: we don't have anybody from Apple attending our meetings, but I have spoken to people and I know our proposals are being seen, and I believe there are no major issues right now. At some point with the test harness we will want to put up a PR to the WebKit tools, so that is another area where I want to make sure we are not doing anything wrong, but that is really far down the road.
+
+NRO: I will try to submit the harness too, and it would be great to have somebody from WebKit who could join our meetings.
+
+RPR: Do we know who the contact for the WebKit tools is?
+
+MLS: I am well aware that TG4 wants people from WebKit to participate, but they are not there yet.
+
+USA: Okay, well, that was the queue, and we have gone over our timebox, but we have a little time left—shall we take a minute to do a summary?
+
+### Speaker's Summary of Key Points
+
+We presented on the TG4 sourcemaps process, its constituents, the editorial specification work, and active proposals for new features, and we demonstrated and discussed both sourcemap validation tooling and automated testing. We are not seeking advancement on anything at this time, but hoped to spread awareness and engagement, and we received really good actionable feedback. Thank you.
+
+## Nova JavaScript Engine—Exploring a data-oriented engine design
+
+Presenter: Aapo Alasuutari (AAI)
+
+- [slides](https://docs.google.com/presentation/d/1Pv6Yn2sUWFIvlLwX9ViCjuyflsVdpEPQBbVlLJnFubM/edit?usp=sharing)
+
+AAI: Okay, I am Aapo Alasuutari, and I am here to repeat a talk that I did at the Web Engines Hackfest about a research JavaScript engine that we are building with a couple of people, called Nova. So this is exploring a data-oriented engine design. First, about me and Nova: I work at Valmet Automation here in Finland, and I hope everyone coming from abroad is enjoying the place and finding the technical university atmosphere fun—I was a technical university student myself, and I graduated, and I love these places.
+
+AAI: So at Valmet we are building a browser-based automation control system user interface—a lot of words—an automation system, though it is not the control system itself that is browser-based but the engineering tools. And UI performance there is not measured in nanoseconds—you don’t care whether something takes 200 or 400 nanoseconds—but if it goes into multiple hundreds of milliseconds, you will start to really suffer quickly, so you can maybe see where I am coming from with the data-oriented side here. Um, the Nova engine started in 2020 as kind of a joke—how hard could it be to build a JavaScript engine? We did not have a particular goal, but: let’s find out. But then last year, from a guy who worked in the industry, I learned about entity component systems and data-oriented design, and immediately I thought: wait, what about a JavaScript engine built with an entity component system—wouldn’t that be fun? And maybe that would actually be revolutionary, maybe. So okay, what on Earth is data-oriented design? How many of you have heard the term? Yeah, okay, excellent. First of all, in data-oriented design you must know your data; you must know what you are working with. If you don’t know your data, you don’t know your problem, and if you don’t know your problem, you cannot solve your problem. So knowing your data and its common use cases is the most important thing in software development, and if you know your data and your common use cases, then you can make your data structures support those cases. I could give some dumb examples, but maybe we don’t have time for that.
+
+AAI: Your program, when it runs, whatever it does, does not touch one thing once. It does not go to a string, read one byte from it once, and never again. It does multiple things, and common things, over and over again. It runs loops and iterates and has algorithms that do this repeatedly, and as programmers we need to think of this multitude; if we don’t think about the multiples and the loops, we are doing the wrong thing. An important point is that when a computer loads data, even if you have asked for a single 64-bit integer, it will load a whole cache line. So when you design your data structures, you should aim for structures that support this cache-line loading behavior, and you want to use the most data possible from that cache line. If you load a cache line and use a single byte, you have wasted time.
+
+AAI: So, though it is painful, you should be ignoring the singular case. If something is a one-off, or happens rarely, its performance does not matter—unless you are building a performance benchmark that does defineProperty in a tight loop or something. So, in a JavaScript engine, what do we know of our data? I think we have the foremost experts here, so we can make some good guesses. What do we know about objects? I have one answer here, but does anyone have good answers about what happens with objects normally? Properties? Yeah. What do you normally do with properties? You read them and write them.
+
+AAI: What you don’t really do much anymore is delete them. Hash-map-style objects used to be really common, but not so much anymore—we have proper Maps now, so we don’t need hash-map objects, and the common case for a property lookup is one that will succeed, unless it is looking for a prototype method. And when you do an object property lookup, you need to access the keys and search through the keys. Of course, engines do various optimizations to avoid this as much as possible, but at the theoretical level, you need some abstracted way to check that you have searched through the keys and found the particular key that you match. Then when you find that match, you get the value. At most one value. If you find no match, then you don’t access a value, and you go to the prototype chain.
+
+AAI: What about arrays—what do we commonly do with arrays? We hold them. That is basically what I am doing all day. Holding them. Arrays we normally access by index, right? And a by-index lookup, from the spec perspective, is exactly the same as an object property lookup, but from the engine perspective it is not the same. You go to the elements, and you know by index which place you will access; there is no key check necessary, you only need to check that the index is within the length of the elements. And then you get that value. If there is a hole there, you go to the prototype chain.
+
+AAI: ArrayBuffer byteLength, DataView getUint8—any prototype method, really, but we also have these things that we think of as methods on the—sorry, properties on the—objects like ArrayBuffer, and these never match a key. Theoretically somebody might put a key there, so we need to support the case, but it is never going to happen.
+
+AAI: So, here is an example of an object with two properties in V8, specifically created from a literal, but anyway, this is what a single cache line looks like, and V8 already knows that putting stuff next to each other is good business. There is the prototype pointer—or actually the shape pointer, which abstracts how the keys are laid out—the elements pointer, the properties pointer, and the two properties that were defined on the object, inline in the same object, so you don’t need to go through the properties pointer. How fast is it to get P0 on one object? You should not think about that; if you are thinking of the one case, you are wasting time. We can think instead about how well we are using the memory that we loaded. To get P0, we need to check the prototype, or the shape, or the keys, and then we get P0, so we are using 16 bytes out of this cache line. And if we are mapping through objects like this and getting P0, the next object’s data starts on the same cache line. It is not bad, but it is not great.
+
+AAI: Now here is an alternative: I have split up 8 objects across various cache lines. One cache line contains the prototype pointer for each object, one contains the elements pointer for each, one contains the properties pointer for each, and then there would actually be two more cache lines where the properties themselves are laid out. Now, if we think about mapping over these, we can actually think about the speed. With the prototype and properties cache lines, we are caching all eight objects and every single byte is used; in the two properties cache lines—the yellow/red cache lines there—we access every other 8-byte section. So in total we use about three quarters of the bytes we load. Better, and we could go lower, but then, you know, there are tradeoffs.
+
+AAI: So this layout improves cache line usage, but there is a cost. What is the cost? An object cannot be a pointer. If I split the object’s data into different cache lines, how do I have a single pointer that points to every one of those cache lines? That is not really a thing. Okay, I guess you could do pointer offsets, but that sounds kind of painful. But an object can be an index. If, instead of blindly splitting into cache lines, you split into different vectors, and every vector at the same index has data for the same object, then the object can be the index into these vectors. But that means that, because we are using vectors to store the heap data, the object data must all be the same size. We can’t change the size of our vector entries depending on what sort of stuff is in there—it is fixed at build time.
+
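As a rough illustration of that trade-off (all names here are hypothetical, not Nova's real code): splitting an array-of-structs heap into parallel vectors means the object handle becomes a plain index, and a pass that only reads prototypes touches only one densely packed vector.

```rust
// Hypothetical struct-of-arrays heap: each field lives in its own
// vector, and an object is just an index valid across all of them.
#[derive(Default)]
struct SoaHeap {
    prototypes: Vec<u32>, // a 64-byte cache line holds 16 of these
    properties: Vec<u32>,
}

impl SoaHeap {
    fn alloc(&mut self, prototype: u32, properties: u32) -> usize {
        self.prototypes.push(prototype);
        self.properties.push(properties);
        self.prototypes.len() - 1 // the "object" IS this index
    }

    // Scanning prototypes reads one contiguous vector: every byte of
    // every loaded cache line is a prototype we actually use.
    fn count_with_prototype(&self, proto: u32) -> usize {
        self.prototypes.iter().filter(|&&p| p == proto).count()
    }
}

fn main() {
    let mut heap = SoaHeap::default();
    for i in 0..8u32 {
        heap.alloc(1, i);
    }
    assert_eq!(heap.count_with_prototype(1), 8);
    println!("objects sharing prototype 1: {}", heap.count_with_prototype(1));
}
```

The fixed-entry-size constraint mentioned above falls directly out of this shape: every entry of `prototypes` (and of `properties`) must have the same type and size, decided when the program is compiled.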
+AAI: This then means that if we are asking whether this object is an array buffer or whatever, your simplest object would need to be as big as the biggest object can ever be, which is pretty big and pretty wasteful. But what if we just have different kinds of vectors? If each exotic object type has its own heap vector, then we can indeed have perfectly sized vectors for each type; we just need to know which vector we are accessing. So our JavaScript value becomes a tagged union: it has a byte-sized tag telling you which vector you should look into, and it has the heap index—which index in that vector you should access to find your heap data—or of course you can have stack data in place of a heap index as well.
+
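A minimal sketch of that tagged-union value, assuming hypothetical type and field names (not Nova's actual definitions): the enum discriminant acts as the byte-sized tag selecting a heap vector, and small integers live directly in the value with no heap allocation.

```rust
// Hypothetical tagged-union JavaScript value. The enum discriminant
// is the "tag" that selects a heap vector; the u32 payload is an
// index into that vector. Safe integers need no heap data at all.
#[derive(Clone, Copy, Debug, PartialEq)]
enum Value {
    SmallInt(i64), // stack data, no heap index
    Object(u32),   // index into Heap::objects
    Float(u32),    // index into Heap::floats
    String(u32),   // index into Heap::strings
}

#[derive(Default)]
struct Heap {
    objects: Vec<ObjectData>,
    floats: Vec<f64>,
    strings: Vec<String>,
}

#[derive(Default)]
struct ObjectData;

impl Heap {
    // Only non-integer numbers allocate heap data.
    fn alloc_number(&mut self, n: f64) -> Value {
        if n.fract() == 0.0 && n.abs() < 2f64.powi(53) {
            Value::SmallInt(n as i64)
        } else {
            self.floats.push(n);
            Value::Float((self.floats.len() - 1) as u32)
        }
    }

    // The tag fixes which vector we read, so a forged index can only
    // ever reach data of the matching type.
    fn as_f64(&self, v: Value) -> Option<f64> {
        match v {
            Value::SmallInt(i) => Some(i as f64),
            Value::Float(i) => self.floats.get(i as usize).copied(),
            _ => None,
        }
    }
}

fn main() {
    let mut heap = Heap::default();
    let a = heap.alloc_number(42.0);
    let b = heap.alloc_number(0.5);
    assert_eq!(a, Value::SmallInt(42));
    assert_eq!(heap.as_f64(b), Some(0.5));
    println!("{:?} {:?}", a, b);
}
```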
+AAI: Um, so, as we saw on the previous slide, we get better cache line usage by loading only the parts needed for a particular action. When we are getting properties, we only need the prototype and the properties. Is that all we can do? No. Often we don’t really need the object aspect at all. So for objects, what we do is actually drop the elements out of the engine entirely. Our objects are never array-like—it is not common for custom objects to be array-like. But, you know, you lose some and win some.
+
+AAI: For arrays, we don’t have any properties or prototype on the array. That sounds like I am breaking the spec here, and kind of, but no. We have the elements pointer for the array, and we also keep—you can think of this as a custom internal slot—a backing object slot on the array. If you do something stupid, like change the prototype or assign named properties aside from length, then we are going to create the backing object, and the internal methods of our array all work by checking,
+“Are you doing something stupid? If so, go to the backing object.”
+
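A sketch of that lazily created backing object, with hypothetical names: the fast path stores only elements, and the full ordinary-object machinery materializes only when someone does "something stupid" like adding a named property.

```rust
use std::collections::HashMap;

// Hypothetical array heap data: elements plus an optional backing
// object that is only created when the array is used like a plain
// object (named properties, changed prototype, ...).
#[derive(Default)]
struct ArrayData {
    elements: Vec<f64>,
    backing_object: Option<HashMap<String, f64>>,
}

impl ArrayData {
    // Fast path: indexed access never consults the backing object.
    fn get_index(&self, i: usize) -> Option<f64> {
        self.elements.get(i).copied()
    }

    // Slow path: a named property forces the backing object into
    // existence, mirroring "if so, go to the backing object".
    fn set_named(&mut self, key: &str, value: f64) {
        self.backing_object
            .get_or_insert_with(HashMap::new)
            .insert(key.to_string(), value);
    }

    fn get_named(&self, key: &str) -> Option<f64> {
        self.backing_object.as_ref()?.get(key).copied()
    }
}

fn main() {
    let mut arr = ArrayData { elements: vec![1.0, 2.0], backing_object: None };
    assert_eq!(arr.get_index(1), Some(2.0));
    assert!(arr.backing_object.is_none()); // no backing object yet
    arr.set_named("foo", 3.0);             // "something stupid"
    assert_eq!(arr.get_named("foo"), Some(3.0));
    println!("ok");
}
```

Well-behaved arrays never pay for the backing object: the `Option` stays `None`, and only element and length accesses run.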
+AAI: For the array buffer, likewise, there are no properties or elements. If you are doing something stupid with it, go talk to the backing object; otherwise you will only get the array buffer features—the specific features for that object. So this is what our array heap data looks like. More or less, there is a backing object and then the elements in the same struct, so this is not split across different cache lines. The backing object, if it is not there, is a realm reference, so we know which realm’s intrinsics we should use to create it later—that is 4 bytes, a 32-bit integer, basically. And the elements field has a 32-bit integer which gives the length, and then some bits that tell us whether the length is still writable and how to access the elements. If we really wanted to start optimizing, we could take the struct down to 4 + 8 bytes, and then down to 4 + 4 + 4 if we wanted to, which would let length access optimize better.
+
+AAI: And obviously the common case for an array is to access the elements or the length, and if you are looking for the elements, you need to access them anyway, so splitting the struct apart does not do much; what this pessimizes is assignment to the backing object. Anyway, you can think of this as reversing object wrapping. The elements themselves actually live in heap vectors, and they look like this. That is kind of a mouthful to say and read, but basically there is a vector of arrays of a given size: if it is 2 to the power of 1, that is 2, so we have a vector of 2-element arrays of values, and on the side you have the descriptors, which is a hash map keyed by a particular index. Why? Because nobody uses property descriptors on elements, so why would you keep them or eagerly create them? Just throw them out, and if somebody does something stupid, okay, we will create the descriptor for that particular index, and accessing it will be slow. But you should not be doing that.
+
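The size-bucketed element storage with a descriptor side table might look roughly like this—a hypothetical sketch simplified to a single bucket size, not Nova's real layout:

```rust
use std::collections::HashMap;

// Hypothetical bucket of element storage for arrays with capacity
// 2^2 = 4. Each array owns one row; property descriptors live in a
// side hash map because almost no array element ever has one.
struct ElementBucket4 {
    rows: Vec<[Option<f64>; 4]>,
    // (row, index-in-row) -> descriptor; populated only when someone
    // does "something stupid" like defineProperty on an element.
    descriptors: HashMap<(usize, usize), Descriptor>,
}

#[derive(Debug, PartialEq)]
struct Descriptor {
    writable: bool,
    enumerable: bool,
}

impl ElementBucket4 {
    fn new() -> Self {
        Self { rows: Vec::new(), descriptors: HashMap::new() }
    }

    fn alloc_row(&mut self) -> usize {
        self.rows.push([None; 4]);
        self.rows.len() - 1
    }

    // Fast path: plain indexed read, no descriptor lookup at all.
    fn get(&self, row: usize, i: usize) -> Option<f64> {
        self.rows[row][i]
    }
}

fn main() {
    let mut bucket = ElementBucket4::new();
    let row = bucket.alloc_row();
    bucket.rows[row][0] = Some(9.0);
    assert_eq!(bucket.get(row, 0), Some(9.0));
    // Only the rare case ever touches the descriptors map.
    bucket.descriptors.insert((row, 0), Descriptor { writable: false, enumerable: true });
    assert_eq!(bucket.descriptors.len(), 1);
    println!("ok");
}
```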
+AAI: So okay, we get improved cache line usage with this, but it only works if the items are on the same cache line in the first place. And indeed, luckily, in GC systems we have the axiom that most objects die young—that is why we do generational GC in the first place; if it were not true, GC would be mostly painful. And there is a corollary that I kind of made up: most objects live together—created at the same time, used at the same time, they mostly die at the same time. So, as we create our heap data, items created at the same time are created one after the other, and they are most likely on the same cache line.
+
+AAI: And when we GC our heap, we keep these items together. If items die from the middle, we pop them out and shift all the following data down. Sounds like a huge amount of work, and I agree, but it means that our vectors are always packed, and data that was together will stay together, so we are likely to keep this cache line performance going forward. Unfortunately, that means that our values all need to change: your value points to index 300, but 30 items before index 300 have been shifted out, so the index should now be 270, and you have to change the value, realigning the index. This is simple to calculate and can be done in parallel. And it is not that different from what most engines do—apparently not every engine does this, but V8 has some packing and so on.
+
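The sweep-and-shift with index realignment can be sketched like this (a hypothetical, single-threaded version; the talk notes the real thing can run in parallel per vector): live entries compact to the front, and each surviving index shifts down by the number of dead entries before it.

```rust
// Hypothetical compacting sweep over one heap vector. `marked[i]`
// says whether entry i survived marking. Returns a remap table:
// old index -> new index (None for dead entries), which is then
// used to rewrite every live Value's heap index.
fn compact<T>(heap: &mut Vec<T>, marked: &[bool]) -> Vec<Option<u32>> {
    let mut remap = vec![None; heap.len()];
    let mut write = 0usize;
    for read in 0..heap.len() {
        if marked[read] {
            heap.swap(read, write);
            // The live entry at `read` now lives at `write`: it has
            // shifted down by the number of dead entries before it.
            remap[read] = Some(write as u32);
            write += 1;
        }
    }
    heap.truncate(write); // the vector stays densely packed
    remap
}

fn main() {
    let mut floats = vec![0.0, 1.0, 2.0, 3.0, 4.0];
    // Entries 1 and 3 died; survivors stay together, still packed.
    let remap = compact(&mut floats, &[true, false, true, false, true]);
    assert_eq!(floats, vec![0.0, 2.0, 4.0]);
    assert_eq!(remap[4], Some(2)); // a Value holding index 4 is realigned to 2
    println!("{:?}", remap);
}
```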
+AAI: But this does mean that our identities are liable to change during GC, and we do rely on object identity sometimes—this will come up later. But okay, as a consequence, our heap does not fragment over time, and with the vectors we actually also get a separate nursery for free: we can decide that our nursery starts at index 100, anything above that is new space and anything below is old space. If you are assigning stuff into old space, we mark it dirty—or should mark it dirty—and later, in our minor GC, we need to start the GC from those parts, but otherwise this is kind of free.
+
+AAI: So in conclusion, what are the upsides? What benefits do we see in the future from this kind of engine design? First of all, obviously, reduced memory usage, because we are not creating the object base for each array or each array buffer. We could actually slim down the object itself by 12 bytes if we really tried, though we will not try that hard. We also get excellent cache line usage, or at least the potential for it, which matters especially considering this is a dynamic language we are talking about. And since the heap is simply vectors, it is easy for a user to reason about: if you create an object and an array, they will be next to each other, and things work that way. Though of course users should not rely too much on engine specifics, but whatever.
+
+AAI: Um, the vector-based heap, since it is just vectors, is kind of simple to reason about. And the tagged-union JavaScript value that we use has no pointers. There is no way to get a JavaScript value from our engine and follow it back into the heap to do dirty stuff, and there is no way to do a type confusion attack: the tags never change, and you can think of the tag as an identifier of which part of the spec this object is implementing. Is it a function with no internal slots? A function with a promise internal slot? Each of these has its own internal slots, and its internal methods never change. So if you are an attacker and you change the tag of some value, that does not let you reinterpret data in a different way: it changes both the type of your data and where it points, because the vector you access is chosen by the tag. So this is not type confusion; it is just a very weird way of changing your value into another value. And we currently keep the value at 64 bits, so a lot of stack values fit there, which is quite nice, and this is quite a nice way to separate exotic objects from ordinary objects: they just ask the ordinary object to do the stuff for them. But there are of course downsides, because trade-offs are a thing. Array-like objects will be pessimized: when you assign an index property into a normal object, it goes to the end of the properties, and I am not going to reshuffle the properties to put the index properties at the top. You are just doing bad things, and you shall feel bad. When you get the object’s keys, I will then rearrange them into the spec-required order. And exotic objects, if you assign properties to them, do get an extra indirection.
+
+AAI: And as I said, each internal-slot case gets its own tag and its own heap vector. If we don’t do that, we need to add the possibility of that stuff to the general case, and that is not nice. I will skip over that. Obviously the performance of GC remains to be proven. The GC does exist, but it currently just mostly empties the whole heap, which is not very great—everything will fail after you have run it. Um, and whether the heap compaction and the index realigning can be fast enough, I don’t know; it shall be seen, and I am hopeful. And hash maps—damn, hash maps. Since indices change, every hash map keyed on values needs to be rehashed and reinserted. That might be very bad. Obviously, as the heap size grows, these problems will get worse and worse.
+
+AAI: All right, hopefully this was interesting. That is our ongoing work and exploration into data-oriented engine design for JavaScript. There are a couple of links—this is from a previous talk I gave—and I have some amount of bonus slides if you happen to ask the presenter questions, or if you are silent, and so on. Any questions?
+
+ACE: I was wondering if you have had a look at the structs proposal, because that proposal kind of takes the secret things that developers eventually learn about JavaScript objects—use a fixed set of fields, don’t delete them, don’t change the prototype, the secrets you find out—and makes them first-class language citizens. It does not have the vectorization aspect—an array does not need to contain all the same structs—but there is a lot of overlap, and I am wondering what your thoughts are?
+
+AAI: Is that the Record and Tuple?
+
+ACE: Not that one, but similar. It is a class, but you cannot change the properties or the prototype, and the fields are fixed.
+
+AAI: I have not seen that.
+
+ACE: You should check that, I think you will be really interested.
+
+AAI: That is definitely interesting. And I will hijack this to mention ABO, who is one of the people working on Nova. ABO has been thinking about implementing the Records and Tuples proposal, specifically with the idea that if we can implement it in Nova, why can’t any other engine do it?
+
+WH: How do you represent primitives and strings?
+
+AAI: Um, so they have their own tags. Currently our strings are UTF-8, kind of by choice—trying to see how big of a pain it will be to pretend that UTF-8 is UTF-16 underneath. They have heap vectors if they are heap data, and we have integers on the stack: we have a tag for integers, and there are 7 bytes of data there, which means we can represent all JavaScript safe integers on the stack without any heap allocation. But when you go to doubles, we do have a vector for those, so you have an index into the heap, which is where your float actually lives. Same thing for strings, basically.
+
+WH: When somebody constructs strings incrementally, do you construct a new string each time?
+
+AAI: Sorry I did not quite catch that question.
+
+WH: I am curious about incremental string construction.
+
+AAI: Yeah, um, currently there is so little of the spec implemented that this is mostly a future thing to think about, but we do have string concatenation implemented: we build the string while concatenating and then put it into the heap at a new string index, and give you the index with the proper string associated with it.
+
+BEL: I have a question: you talk about optimizing caching, so optimizing for space. Does that somehow imply that it is also optimizing for time? Because you are kind of trying to make the most-used parts accessible most easily, without extra lookups?
+
+AAI: The performance of the engine right now is bad—it is very, very rough. But yes, the idea with the heap structure being vector-based and so on is that maybe this way the important things, like mapping over an array of items, doing a forEach, those sorts of things, will be way faster, because the things that you are normally doing are exactly what the heap structure is optimized for.
+
+SFC: Um, yeah, one area I don’t know if you have explored: when you load a cache line of 64 bytes and you are trying to store, say, integers in that space, you can store many more integers with variants where you basically bit-pack them, squeezing them into as few bytes as possible. This has a downside: for example, you can’t get a Rust slice of them, because they are not aligned. So this gets into a tension over what is more important, bit packing or alignment. The other project I work on, which is a Rust project, has zerovec, which favors bit packing over alignment, because at least in many cases—it is not universally true—we found that you get better performance, because stuffing things into fewer bytes means less to read, so you don’t use the cache as much, and that is advantageous. I don’t know if that is something you have explored.
+
+AAI: No, we have not. I do wonder whether we could use it somewhere. As I said, currently all of our JavaScript safe integers stay on the stack entirely, in the 64-bit value. So we don’t have a heap vector of integers, where bit packing would become more reasonable. And can you actually index into bit-packed vectors?
+
+SFC: It depends. So, for example, JavaScript safe integers are, what, 53 bits—not all 64. That is not super compelling, but maybe you can fit 9 of them in the space of 8, something like that. More compelling is variable length: if you have 64 bytes and the first byte basically tells you where to get the rest of it, right? So there is definitely room, and for smaller integers this could be quite a bit more efficient. We have implemented things like this—packing a whole bunch of weights into as few bytes as possible—and storing them in less space makes things more efficient. It is a win/win, because it is also faster. Again, it is not universally true—there can be exceptions, and you need to measure, of course. It is just maybe an extra path, an interesting area of exploration.
+
+AAI: Yeah, I can’t think of easy places to apply it, but it is definitely interesting; it might be useful. Actually, one potential place: if object-index or heap-index changing becomes a real problem, we might want to do a conversion where heap indices are indirected—your index points into a vector that points to the proper index—and then that vector of indices might benefit from bit packing.
+
+SFC: I mean, avoiding the heap is a huge win; with the heap, everything grinds to a halt. So keeping things on the stack as much as possible—even if it is bit-packed and less efficient to read—staying off the heap is a huge win.
+
+RPR: Okay, Chip, you have 30 seconds until lunch.
+
+CM: So if I understand this correctly—and I am not in the least sure that I do—a lot of what you are doing is exploiting object homogeneity: when you have a piece of code that produces objects, it will produce objects that all have the same shape, because it is the same code each time (unless it is doing something weird). But another common case is where the object originates from something like JSON.parse, which is interpreting data that originated externally, and while it might in fact be generating homogeneous objects, there is no way to expect that is happening. I am wondering how you deal with things like this—where the homogeneity of the object is essentially extrinsic and not visible in the program?
+
+AAI: The way we magic around this is the elements vectors that I mentioned. So we have—or actually don’t have at the moment—small element vectors; the smallest elements vector holds objects of up to 16 values, properties. But I will take that down to 2 and 4 and 8, probably, so that the low end is packed. Now if you are parsing a JSON file and you find objects that have two properties, they will end up in the vector of two-valued objects.
+
+CM: So the interesting characteristic is how many properties, rather than their names: you can have one object with properties ABCD and another with DEFG, and in your world they are the same shape—okay, that is interesting.
+
+RPR: Okay, we got through the queue, with perfect timing. So thank you, AAI.
+
+AAI: Thank you, everyone. As I said, it was very nice to be invited, and I will hopefully be joining lunch with you, because I have missed my train.
+
+(lunch)
+
+## Smart units progress update
+
+Presenter: Ben Allen (BAN)
+
+- [proposal](https://github.com/tc39/proposal-smart-unit-preferences)
+- [slides](https://docs.google.com/presentation/d/1WCdpcX4IpObi0CD1ftXA9QbZL5RSEGlYGXdqw3EfIdg/)
+
+BAN: We are revisiting a proposal from several years ago, at the stage of figuring out its design. So the main purpose of this presentation is to get feedback from the folks here on the direction we should go. Smart units.
+
+BAN: Smart units—the short version; you will see more on a later slide—is localizing measurements to the preferred measurement units, scales, and precision for a locale. The goal is simply to discuss which parts of this localization problem are significant, and which are solvable and best solved by ECMA-402. There are many different places where we could split off parts of this proposal, and getting a sense of whether people consider that appropriate—the best way to proceed—would be really useful.
+
+BAN: A non-goal is to convince you that any particular part of this problem must be addressed, or addressed in a particular way. This talk is more about the discussion afterward than about the slides, so I will get through the slides as quickly as possible. If I find myself going too fast, feel free to give some sort of subtle gesture indicating: hey, turn it down a little bit.
+
+BAN: All right. So here’s the problem: properly localizing content requires properly localizing measurements. It’s not just about using, say, a dot versus a comma as the decimal separator. Content with non-localized values—numbers in unfamiliar scales, or otherwise not formed the way users expect—is, in the worst case, incomprehensible to users. I was going to rank those two problems: one problem is that content with non-localized values can seem strange to users; the more serious problem is content that is incomprehensible to users. This is a real problem. One of the great things about talking about this sort of thing is that there are all these great memes about it. This comic I like because of something that comes up towards the end of the talk: the final panel also indicates problems related to the United States' aggressive cultural imperialism. All right.
+
+BAN: I am resisting the urge to read that joke out loud, but everyone can see it, and it’s funny. You can see it was definitely written by an American: the fact that the guy is describing the height with that level of precision is a real tell, as are the threats delivered at the end.
+
+BAN: But if you look at that comic strip, the problem is, primarily—well, okay. Some regions use metric and some imperial. Sometimes the measurement units used in a given region vary based on the type of thing being measured; this most often occurs in the commonwealth countries, which use both, depending on the context. The other problem is that sometimes measurement units vary based on the value itself: different scales are used for large values than for small ones. And cross-culturally, the precision of a given numerical value varies as well—for a distance, say, it depends on the size of the value itself. Typically, if you are dealing with a value in the thousands of kilometres, the precision isn’t going to be single-digit: you typically don’t want to represent something that is about 1,000 kilometres as 1,003 kilometres, unless you’re in a scientific context. Here is my other really fun meme about this. I am going to tell the truth here: this meme is one of the reasons why I was excited to work on this proposal.
+
+BAN: So, as I said earlier, the countries that tend to use a complicated mishmash of units are commonwealth countries; this shows up in Canada and India. But if you are in Canada and you want to seem like a Canadian, typically—side bar: a lot of the stuff in this talk involves me saying things about how people do things in different cultures, and for most of those, the information I have is from Wikipedia and CLDR. This information could be wrong, so I apologize if I am lying about your language. Please correct me, so I can then go and try to correct CLDR.
+
+BAN: But so… at least according to this meme, if you want to seem like a proper Canadian and you are giving a temperature used for cooking, you are going to do that in Fahrenheit rather than Celsius. If you do it in Celsius, you are not from here. And at the bottom: if it’s related to work, use metric; if not related to work, use imperial.
+
+BAN: I alluded to this before: the precision used in formatting is itself context-, locale-, and quantity-dependent. So the problem isn’t simply that some places use metric and some places imperial and you want to respect that. The complexities are essentially fractal. And values that are too precise—for example, the height given in the first panel of the comic at the start—are not localized; they make it seem like you are not from around here. It’s weird to claim 182.88 centimetres on your dating app profile. A lot of people claim to be 6’ tall. If people lie on their dating profiles, the lies should be appropriately localized.
+
+BAN: Okay. So, that preamble aside. The goal: it’s not possible to design an internationalization API that makes all measurement localization automatic in all cases—we are not proposing something that is all things to all people—but instead to make the most common cases easy to localize. I don’t know, this is an analogy that probably resonated more with me than with anyone I have talked to, but these divergences from standard measuring practices most often come up, I think, in contexts that are particularly important.
+
+BAN: The analogy is that the most commonly used verbs in languages tend to have irregular conjugations. Similarly, the kinds of measurements people have been making in their day-to-day lives for hundreds of years tend to use dramatically different measuring systems than other kinds of measurements. And the places they occur are important.
+
+BAN: Again, correct me if I am wrong, but in India most measurements use the metric system, yet for the height of a person, feet and inches are more commonly used. Similarly, some locales in the EU prefer giving the height of a person as a mixture of metres and centimetres rather than simply metres. In these cases, representing a height as plain centimetres is going to be seen as: this was probably written by someone who isn’t from here.
+
+BAN: Here are some other quick examples. One place where this often shows up is with fuel consumption. Fuel consumption in Canada for cars is actually most commonly given in miles per imperial gallon. A subtler but significant one: most regions will measure fuel consumption in terms of litres per hundred kilometres, but some use litres per kilometre instead. So that’s less of a comprehensibility problem, but something that can cause us to trip up when reading text.
+
+BAN: I will go through these fairly quickly. So tell me if this is wrong: in Finland, wind speed measurements are given in metres per second.
+
+EAO: Yes. It’s correct.
+
+BAN: Okay. And yeah, this is a fun one. This is the Scandinavian mile, used in informal contexts. It is 10 kilometres. From the outside, that seems like a thing that exists primarily to play pranks. But some of these contexts are actually formal contexts. The example that I have seen is that there are tax forms in Sweden related to travel reimbursements that use the Scandinavian mile. Also, many regions will use different measurement scales for the lengths of infants and the heights of adults. In Canada, if you describe the height of a person, you use feet and inches; for a 6-month-old, you would do it in inches.
+
+BAN: Okay. So these are some contexts in which it might be particularly important to localize content such that it respects these different cultural preferences. Weather applications; and health-related applications might be particularly important—any large city hospital system will have patients from hundreds of places, and it’s very, very important to have medical information available to people in a form they can read themselves. Another one I thought of during a conversation maybe 12 hours ago: one problem with environmental organizing in the United States is that people will not take seriously temperature values given in Celsius. 3°C sounds like no big deal, while 5.4°F sounds like a big deal, even though they are the same value.
+
+BAN: Fortunately, all of the data needed to build such an internationalization API is also in CLDR—I mean, it is CLDR data, so it’s not necessarily reliable, but there is information on context- and quantity-dependent units for all locales in CLDR, and also the constants used for conversions. This is supplemental/units.xml. We’re dealing with about 8 kilobytes of data. Here are some screenshots of these conversion constants—pretty much all of the relevant ones, and some fairly obscure ones which could themselves be relevant. And here is the information that they have on which locales use which units. Again, the usual suspects are the United States and commonwealth countries. But there are variations in this sort of thing that aren’t directly related to metric versus imperial, and that occur in a lot of places.
+
+BAN: And this is just a screenshot of some of the data on precision based on the size of the value. That “geq” property is a threshold: if the value is at or above the threshold—and there’s no higher threshold that it has also crossed—it will use a specific precision, and sometimes the unit changes too. For example, in Great Britain, you will typically give distances in yards until you get up to half a mile; then you start giving distances in miles.
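+
+The threshold mechanism described above can be sketched roughly as follows. This is a minimal illustration, not the proposal’s actual algorithm, and the preference data is made up to mirror the GB road-distance example (yards below half a mile, miles above):
+
+```javascript
+// Hypothetical sketch of CLDR-style "geq" thresholds: rules are ordered
+// from the largest threshold down, and the first rule whose threshold
+// the value meets or exceeds wins. Thresholds here are in the
+// category's base unit (metres); the data is illustrative.
+const METRES_PER_MILE = 1609.344;
+const METRES_PER_YARD = 0.9144;
+
+const gbRoadPreferences = [
+  { geq: 0.5 * METRES_PER_MILE, unit: "mile", perMetre: 1 / METRES_PER_MILE },
+  { geq: 0, unit: "yard", perMetre: 1 / METRES_PER_YARD },
+];
+
+function pickUnit(metres, prefs) {
+  // prefs are sorted by descending geq, so find() returns the
+  // highest threshold the value has crossed.
+  const rule = prefs.find((p) => metres >= p.geq);
+  return { unit: rule.unit, value: metres * rule.perMetre };
+}
+
+console.log(pickUnit(200, gbRoadPreferences).unit);  // "yard"
+console.log(pickUnit(2000, gbRoadPreferences).unit); // "mile"
+```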
+
+BAN: The goal is to provide users with content that uses comprehensible and culturally appropriate scales, and to do so only where localizing the measurements can be done reliably and deterministically. As an example of a thing we wouldn’t localize: clothing size. It would be really, really handy to convert clothing measurements between European and American scales. But something that claims to be one size from one company can be nothing like the equivalent size from another company; sometimes even a single manufacturer will have internally inconsistent sizes. It’s not actually reliable—it’s not possible to say what a given clothing size is. So even though it would be really handy to be able to localize content so that it uses the appropriate clothing measurement scale for each locale, that’s something that is not on the table. Maybe someday in the far future manufacturers will standardize clothing sizes, but not any time soon.
+
+BAN: Okay. Another non-goal: we don’t need to support every possible context-related measurement system variation. For example—I know this happens in the UK, and I suspect it happens in other places—the measurement scale that you use for measuring quantities of beer differs from the measurement scale used for other liquids. And often foods—ingredients used in recipes—will have idiosyncratic scales. We don’t have to support all of them; we don’t have to let you fully “measure like a Canadian”. We don’t have to have a way to flag something as a pool temperature so that it can be represented in Fahrenheit.
+
+BAN: And another non-goal—you probably went and looked at units.xml; there’s a lot of stuff in there. We don’t have to support everything. The goal here is to solve common localization problems, or better, some combination of common and important ones. Okay. We also have—I lost my place.
+
+BAN: This is important, though. When we get to the actual content of the API, what we’re going to talk about is not any sort of object for doing unit conversions. We do not want to provide a tool to do arbitrary unit conversions—if we were, that would be in the scope of 262 rather than 402. We are simply providing a way to produce localized measurements. And we want to discourage its use for purposes other than internationalization. Because typically—and there are people in this room who have worked on 402 for a long time who have very, very specific examples—people use Intl objects to do non-internationalization work, and that results in code that is fragile.
+
+BAN: So using this for general-purpose conversions is not our goal. Instead, our goal is localizing documents for specific locales.
+
+BAN: Okay. We don’t need to support everything. And one convenient non-problem: you could have an API for doing all of the conversions we have talked about so far without providing any new fingerprinting surface. It uses only information that is already revealed—specifically, the locale that is being requested. So expressing these preferences and receiving documents in this way doesn’t require revealing anything new about yourself. This does not provide a new fingerprinting surface.
+
+BAN: Okay. So the proposed API is simply: add a usage parameter to Intl.NumberFormat that takes a string representing the usage. This string could be the name of the usage in CLDR; so if you are formatting a person’s height, include the property `usage: "person-height"`. The output unit and the rounding will be determined by the input unit, the value (which matters because the appropriate unit can depend on the magnitude), and the locale. Here is just a bashed-together example of what it looks like to use it.
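+
+A sketch of what using the proposed option might look like. The `usage` option name follows the slides but is not yet standardized; today’s engines simply ignore unknown NumberFormat options:
+
+```javascript
+// Proposed (hypothetical) usage option on Intl.NumberFormat.
+const heightFormat = new Intl.NumberFormat("en-US", {
+  style: "unit",
+  unit: "meter",
+  usage: "person-height", // proposed: a CLDR usage key
+});
+
+// With the proposal implemented, formatting 1.8 (metres) for en-US
+// would be expected to produce mixed units such as "5 ft, 11 in".
+// Current engines ignore the unknown `usage` option and format the
+// value as metres instead.
+console.log(heightFormat.format(1.8));
+```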
+
+BAN: You will notice something: the expected output would be 5 feet 11 inches, even though the input value is a little bit under 5 feet 11. A more precise output would not be properly localized, because that’s not something that anyone would ever say.
+
+BAN: Okay. So there are a number of questions we need help with, and this is the point of this talk. Question number 1: you might have noticed that the preferred measurement scales for a lot of values—most especially, as we have seen, person height—tend to be given in mixed units. If you recall from the very start of the talk, the thing that gets the American in the comic really mad is being given a height as 74 inches, because very few Americans hear 74 inches and know how tall that is without doing mental math. This requires supporting mixed units.
+
+BAN: Question number 2—I mentioned this at the start; it relates to the topic of United States cultural imperialism. We don’t need to solve all the problems caused by the United States, but we do have one big problem. A lot of people—what’s a good number? 70% of Mozilla users in Indonesia use the en-US localization. It’s possible that’s an outlier, but worldwide there are a lot of people who use that localization in the browser. And because the United States uses a unit system that is just truly baffling to most people, we don’t want to show people site content tailored for a measurement system that is foreign to the rest of the world. This is a real problem. I am interested in hearing people’s ideas on how to deal with this problem, or whether it is a problem we can live with.
+
+BAN: My first thought was that this is something where there could be a permission pop-up. This seems silly: “Can we use your camera? Can we use your microphone? Do you really want to use feet and inches?” But this is something that has come up in conversations we have had, mostly this week: it could maybe be used as a sort of opportunity for user education. A pop-up could offer the option of viewing content that is tailored for your region.
+
+BAN: And question number 3. We are proposing adding something to Intl that does calculations. Should this be something that happens in 402? I think it should. My sense is that NumberFormat is supposed to format numbers, and we need to properly format those numbers to properly localize content—otherwise, you get content that is kind of localized to your locale, but you can’t read any of the figures. So properly formatting a number can sometimes require doing calculations. And perhaps we should do exactly the calculations that are needed for well-formatted, well-localized output and nothing else, and do everything we can to keep people from misusing this for non-internationalization purposes: no way to specify precisions not used in the locale, and so forth. It’s not something for doing generalized unit conversion, but for formatting numbers for different locales.
+
+BAN: That said, the first idea that I have seen people have, when I go reading through notes on the history of this proposal, is: why don’t we just separate out the conversions? Have it so people can flag a given quantity as the height of a person, or as a distance travelled by road, and then let some other library handle the conversion. I am thinking that doing it within 402 would let us better prevent misuse: providing the tools to do exactly what is needed for localization and nothing more, rather than relying on some library to do it.
+
+BAN: And then there are the questions we need help with, number 4: all of the questions we haven’t thought to ask. So at this point, I am going to throw it to the group for questions. I have found, since I started working on this, that this topic is a wonderful conversation starter—people like to think about the ways things are measured in their region. So I would like to throw it over to you.
+
+MF: Thank you for that fantastic presentation. You have already answered many of the questions I wanted to ask, which is great. And I especially appreciate the focus on doing as much as we can but not necessarily trying to hit every single possible thing that we could. So a question I have remaining is about discoverability. Who is the target user of this API? Is it a professional in localization? Is it the average developer? I ask that because if we want just anybody to be able to use this and properly describe the units they're working with, how would they know what kind of granularity to describe things at and what is available for them to describe about it?
+
+BAN: Okay. There are people in this room who have thought about this much more deeply than me. My shallow thought is that the target is roughly the same user category as the rest of Intl, and that it should be on MDN. There are better answers—significantly better. I am going to stand here awkwardly and wait for someone else to respond. I promise you, I can make it particularly awkward.
+
+DE: I think MDN is a good answer for this question. In particular, there’s a lot of surface area already for NumberFormat; the MDN docs are pretty good, and it’s possible to contribute more examples and things like that for the particular set of units that it supports. Given that all the implementations are working from CLDR, I think it would probably make sense to include the list of supported units in MDN, because you can’t just make up a unit and expect it to work. So I kind of want to ask, behind that question: is there any change that we should be considering to the API based on making it more discoverable? I like the shape presented, but maybe other people have other ideas.
+
+BAN: What I was aiming for is the absolute minimal change: just adding one option to NumberFormat.
+
+MAG: So thank you for linking to the actual units file. Peeking through it, it’s interesting to see what is included and what is not included. It’s clearly been maintained in a pretty spotty, ad hoc fashion. So I wanted to know a little bit about how this is being maintained, and then, as it evolves, how do you see additions being taken up through the committees? So, for example, there’s no electric vehicle fuel consumption in there. Say I file a pull request on CLDR—then when could somebody use it in a browser?
+
+BAN: Right. That’s sort of an implementation question, relating to getting it into ICU4X, getting browsers to use it, and so forth. Beyond that, there are pull requests to be put in for the gaps I see—and there are a lot of gaps I don’t see. If you look at that data, there is a lot of fine-grained detail for imperial units related to the UK and much less fine-grained detail elsewhere. This is a difficult problem. One of the reasons I like this proposal: one, it improves localization; two, it gives us a sort of locus for improving our localization practices.
+
+BAN: It’s a good reason to reach out to folks who are not currently represented in the CLDR database, not currently participating—helping people get into CLDR, instead of leaving it an Anglo-centric document in a lot of ways.
+
+SFC: For the original proposal that added unit formatting, which was 2019–2020, we picked a subset of exactly 45 units from CLDR that are currently supported in NumberFormat for unit formatting. Since then, we have basically left the door open for people to file issues: why is this unit not supported? We have received about a dozen requests since then, some more motivated than others, which gives us a signal for which ones might be useful to people. One thing that came up: in order to build an interoperable Intl API, it’s important to have that list. If you don’t have that list, then basically all engines have to do what Chrome does; and if Chrome supports a different set than Firefox or than Boa, it becomes basically impossible to write interoperable code, which is the point of our work. I think that in this case we will probably use some metric to select what subset of unit contexts are most relevant and useful to people on the web platform. There are some that I think are definitely likely to be included—person height, those are popular—other contexts that wouldn’t be included, and then there’s what’s in the grey area. Establishing some metric for how we decide what to include and not include is important. We established such a metric back when we picked the 45. We should do the same thing here.
+
+BAN: There are a number of values and units in there that relate essentially to scientific calculations, and we want to avoid those at all costs, because this is an internationalization tool rather than something to be used for scientific calculations. That said, there’s one oddball in there that might be worth including: there are differences in how countries measure blood glucose. It’s not commonly used, but for the people for whom it matters, its support is extremely important.
+
+USA: Yeah. On the topic of how often we update with respect to that: every engine has its own strategy, and most of the engines that I have come across use ICU to supply CLDR data. Boa uses ICU4X. Chromium and Firefox both package a version of ICU that they upgrade. And Safari uses the system ICU, so every system update can bring new information.
+
+MAG: So Shane, you said that there was a metric when you did the first 45. What was the metric you chose? How did you figure out what the 45 were?
+
+SFC: The metric we used at the time was that we looked at what the common quantities to measure were—length, time, and so forth—and then looked at what units are used for measuring those quantities across the locales, and that’s the set we got. That’s how we derived the 45.
+
+SFC: We actually used some data from units.xml, and we looked at all regions. For example, the 45 units include all units used for measuring weight and all units used for measuring distance, in all locales. So basically, once we picked the quantities to measure, we picked the units based on those quantities and on the locale data. We don’t have all length units, but we do have the length units actually used for measuring things, according to units.xml. It’s not a perfect metric, but it got the job done.
+
+EAO: So one of my main concerns with this proposal is not at all the internationalization parts, but the usages separate from internationalization. A lot of this is about being able to take an input of 1.8 metres and, from there, get an output like 5 feet and 11 inches. JavaScript developers will see that as a built-in unit conversion tool, for which they will absolutely have use cases beyond localization and internationalization, and they will use this tool however they see fit. So my concern is that if we are defining this sort of capability to happen internally in NumberFormat, we need to account for this use or abuse, either by making it work really well for unit conversion too, or by providing some other alternative way of getting the unit conversion output out of this.
+
+EAO: So I think that needs to be a part of the proposal from the get-go, before it can really go further. Because this seems like it’s a lot, and it might decompose into a couple of different parts that could be packaged separately, I would think that unit conversion—which is not necessarily an ECMA-402 thing—should be one of the first parts of the whole proposal to proceed.
+
+BAN: My naive take there is that, one, it should be as useless as possible as a generalized unit converter. Essentially, the way I’ve been trying to think of it is that this is something that should provide formatted units and as little capability as possible for generalized conversion—so, for example, not being able to specify precisions outside of what is appropriate, and so forth. That thought is: yes, of course, people will misuse it. What was the exact wording of the second part of the question?
+
+EAO: Given that JavaScript developers have real-world needs for unit conversion, they will use an API, even if it’s clumsy, to do unit conversion, if that is a thing that is packaged into the standard library.
+
+BAN: For people who want to abuse it that way, the numerical values are currently available via formatToParts. They could be surfaced in additional ways as well. Yeah.
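+
+For reference, this is roughly how the numeric value can be dug out of formatToParts today. This uses only the existing standard API; the person-height scenario is illustrative:
+
+```javascript
+// formatToParts exposes the formatted output as typed parts, which is
+// how someone could extract the (converted) numeric value.
+const parts = new Intl.NumberFormat("en-US", {
+  style: "unit",
+  unit: "meter",
+}).formatToParts(1.8);
+
+// Reassemble just the numeric pieces, dropping the unit and literals.
+const numeric = parts
+  .filter((p) => ["integer", "decimal", "fraction"].includes(p.type))
+  .map((p) => p.value)
+  .join("");
+
+console.log(numeric); // "1.8"
+```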
+
+EAO: Yes, it’s possible to use formatToParts for this. But formatToParts is not the API that one would design if we were developing a unit conversion API. And given that this proposal effectively includes within itself a unit conversion API, we should include within it a good unit conversion API, not one that we make as hard as possible to use.
+
+BAN: I see. That goes beyond things like providing methods to pull out the numbers, rather than going through formatToParts.
+
+EAO: I am not offering an opinion on what the shape of the API should be. I am stating that there needs to be an API—not an internationalization API, or perhaps one with something like a null locale—that provides an explicit way to say: if you want unit conversion, this is how you get unit conversion out of this thing that also does unit conversion.
+
+BAN: I see. So there’s no clean way to keep it as something that is inside 402, rather than outside of 402.
+
+EAO: If you provide a tool that does unit conversion it’s used for unit conversion.
+
+SFC: Yeah. We have definitely learned, as Ben noted in his presentation and Eemeli just highlighted, that when we add functionality that isn’t exposed in a nicer way elsewhere, people use and abuse it. For example, people love to use `Intl.DateTimeFormat` to do time zone conversions, as it’s the only way in the platform to do that—and the only way to do calendar conversions. They love using Intl to do that for them. And if we have unit conversion here, people are going to try to use it. One thing we have learned is to approach this consciously: we are going to have unit conversions, and programmers can abuse them to do their own unit conversion when they are not doing formatting. We have approached it knowing that this is just going to be the way. But formatting is still formatting: we don’t need to support high-precision unit conversion, because we have full control over how the units are rounded—we can round them by the units.xml rules for rounding. That means this can’t really be used for precise unit conversion: someone who wants to abuse it gets, at best, values rounded to the nearest integer or whatever units.xml specifies. So basically, we don’t have to expose a full-powered unit conversion API—only a way to do unit conversions in the ways intended for end-user consumption. If people want to parse the stable formatted output and access the numbers in the results, they can, but it’s not going to serve most use cases for people doing conversions—they should still bring their own library to do that.
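+
+The DateTimeFormat “abuse” SFC mentions looks like this in practice—a formatter used purely to shift a timestamp into another time zone rather than to produce user-facing text:
+
+```javascript
+// Using Intl.DateTimeFormat as a de facto time zone converter: the
+// formatter is asked for Tokyo wall-clock time, not for anything an
+// end user will actually read.
+const tokyoTime = new Intl.DateTimeFormat("en-US", {
+  timeZone: "Asia/Tokyo",
+  hour: "numeric",
+  minute: "numeric",
+  hour12: false,
+});
+
+// 03:00 UTC on 2024-06-11 is 12:00 in Tokyo (UTC+9).
+console.log(tokyoTime.format(new Date(Date.UTC(2024, 5, 11, 3, 0)))); // "12:00"
+```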
+
+BAN: I wanted to add—there are two different paradigms at play, and I like the one you are giving. Specifically, it’s not just that, if people use this for generalized unit conversion, they won’t get anything more precise than an integer. If people use it for distances, above certain thresholds they won’t get anything more precise than about a power of ten, and so forth. So the limitations that can be imposed on this make it very useful for internationalization and much less useful for anything else. But yes—users will find lots of ways to use anything.
+
+DLM: First of all, I would like to thank you for putting together the presentation; I think you did a good job making a case for this. I did appreciate your decision chart—it’s radically simplified in a lot of ways. One area that I think is important to note—and I haven’t had the chance to check whether CLDR covers this—is that anything related to construction is imperial in Canada. Lumber is in feet and inches; if you are ordering soil or anything like that, it’s sold by the cubic yard. I think that may be an important use case, something we may want to look at for CLDR. I am not sure.
+
+DLM: I appreciate the API design as well—it feels minimal, and the person-height use case that you presented feels fairly motivating. It’s easy, and I had some fun going down a rabbit hole, but this is sufficiently useful and unintrusive as an API. The one thing I wanted to question you about is unit pricing. Say I am selling coffee or something like that. This API would let me show the weight in pounds or grams, depending on the user. I could see that being a place where someone might also want to present how much this is going to cost per 100 grams or per pound. That might be a place where people are tempted to parse the output.
+
+BAN: There’s probably a counterexample I am not thinking of, but that’s something that would be absolutely beyond the scope of this anyway, because of the conversions: if you are localizing a document for a locale that shows grams and also for a locale that uses pounds and ounces, those locales will typically use different currencies. As we were saying right before this, it would be a more severe problem if there were any chance, at any time in anything like the near future, that the UK would use Euros. The problem becomes particularly sharp when you have regions that use different measurement scales and different units to represent weights or lengths, but the same currency. So: one, it’s something we couldn’t do with this anyway. Two, people pulling out the data to do the currency conversion is an instance of the general problem we are discussing. Three, there is something I would like to add: I should have looked more closely at `units.xml`, which includes a lot of constants for things that are in units-per-something. So some of the things that could be useful for that are already in CLDR.
+
+SFC: The last time we raised this to committee was 4 years ago—I think it was the Hawaii meeting, and then one of the meetings later that year—so it’s been about 4 years since we presented this to the committee. One of the big pieces of feedback we got at that time was the problem that Ben highlighted as question number 2 in the presentation: a lot of people use en-US as their localization locale, so they would get US units. Did we solve this problem? We spent a lot of time investigating it. There was also a comment on the queue earlier that I have also heard around, which is that unit preferences can change and drift over time. Sometimes it depends on what age you are, or what province you grew up in—it’s not purely region-based, either. So basically, to figure out what the units should be, we can use whatever information we have, which is your locale—your language and region. If you have fallbacks listed, we can use those to be slightly smarter. But it’s never going to be perfect. The best this can really do is serve as an approximation, a best guess for what the units should be.
+
+SFC: So the topic that I have is called “snap poll involving units”. Thanks to Eemeli, we had a really wonderful community event on Monday that attracted close to a hundred local Finns who are JavaScript developers, and Eemeli gave me about 7 minutes of meeting time to give a quick pitch for smart units to that audience: is this proposal useful? Would you use this? Would you use it despite not having very good personalization of units, with the units being guessed and inferred? It was not a scientific poll by any means, but approximately 70%, by visual inspection, raised their hands for “is this a useful proposal?”. 70% is good. For “would you use this despite the understanding that anyone who has en-US gets US units?”, it went down to 40%. So it’s motivated and useful, but there definitely is this gap where a lot of developers see this as a useful proposal that is nonetheless going to favor en-US localization, and that could make them less motivated. That’s why we looked, over the last few years, at whether we could add locale extensions to the language—other ways to have these units localized in a better way. That is where the locale extensions proposal came from, and we have investigated that space a lot. I think there could be room in that area, but it’s definitely been an area where we faced a lot of resistance in getting that direction adopted. That’s why it’s good to take a step back and go back to the smart units proposal. I think the smart units proposal is well motivated by itself, and I also think that locale extensions would be better motivated if smart units were in the language. There are small things we could do with locale extensions—adding en-US with world units, one extra locale, would solve the biggest chunk of the pain points. So I think there’s definitely room to improve this. That’s my topic.
+
+SFC: The short version: I think this proposal is still motivated and should hopefully move forward. If anyone raises the concern that this proposal is fundamentally flawed because of the en-US issue, my response is: I think it’s motivated enough, and I do hope to solve the en-US issue in the future, once this lands.
+
+BAN: I wanted to add: the problem with this is that, yeah, people in a given locale, based on their age or subregion, can have preferences that differ from the preferences that CLDR has, and from the preferences that are more generally common in their locale. This is where we start rubbing against the line between localization and personalization. That said, presumably the data we are using will be continuously updated—CLDR 2024 is not the same as CLDR 2022—and that’s good for the purposes of improving localization. But also, in a weird way, it discourages people from using this as a unit converter. Take the units used in Canada: if, over time, fewer and fewer people use feet and inches to describe length values in Canada, then from year to year the output of NumberFormat will change, and that will kind of punish anyone who is using it as a unit converter. I have worded that incredibly awkwardly, but yeah. I would agree that the central problem seems to be the en-US problem, and I would really, really like to get a sense of how other people feel about that particular one.
+
+JGT: So, having gone through some of the pain that Shane talked about with people misusing APIs, this seems like an incredibly difficult problem to solve. I mean, simply formatting dates or numbers as-is is incredibly hard and incredibly abused. One thing that occurs to me—certainly when you look at why developers are excited about Temporal, one of the main reasons is that they won’t have to ship a 50k library with their app. It’s going to take a long time for this to succeed as a platform technology, but something you could do is help libraries. So instead of shipping a 50k unit conversion library, you could ship a 2k unit conversion library, with the platform providing the underlying plumbing that allows those libraries to be successful. You might want to consider: instead of going all the way, the last mile, to the end-user developer, could you do something that gets most of the way there, but relies on user-land libraries, which can change more frequently? Then it won’t be the kind of thing where you can never change the Swedish date format because it’s the only way to get ISO 8601.
+
+BAN: There’s a minimal version of this that would simply be the capacity to say what the usage of a given number is, so that a library can pick it up. I don’t know—that resonates with me, because if we do exactly what we can do well in 402, aiming for smaller rather than larger, that could actually reduce misuse—we would instead be trusting a library to handle all the conversions.
+
+SFC: Yeah. Next on the queue: I think it’s a good point. It’s something that we haven’t really spent as much time investigating as we probably should have, not just for this but for other types of libraries. An API like `Intl.getUnitConversionFactor` that doesn’t do any conversion itself, but gives us the data in a packaged way, is interesting. I also worry it opens a can of worms: if we do that for unit conversion factors, we should do it for basically everything else in Intl too. Why would we do it only here? Right? I would eventually like to move in the direction of being able to have data-driven APIs. That’s no secret.
+
+SFC: The last question was, Ben brought 3 questions to the committee and we haven’t answered any of them which I think is kind of bad. Because we came here with questions. We didn’t get answers to them, especially question 1.
+
+RPR: I think we should begin summarizing now.
+
+### Summary / Conclusion
+
+BAN: Yeah. I mean, it is remarkably difficult to summarize, because it seems to me that the sense in the room is: yes, this is something that would be very useful, and the pitfalls that we’ve identified are in fact serious pitfalls. So, in that case, what is a non-objectionable summary? The room expresses interest, but has serious concerns about how to make it useful without it being misused as a general unit converter today. It is unsettled whether it should be a minimal version that does not do the conversions itself but supplies the plumbing for doing them. That is a very, very unsummarized summary.
+
+RPR: Okay, and are there any next steps you would like to state?
+
+BAN: Oh, jeez, that’s a fine question. It seems like next steps might be determining specifically the things that are going to be supported, and additional work on ways to prevent the misuse problems we identified.
+
+RPR: If people want to work with you on this, did you have any venue that you’re running this in? Or should they just DM you?
+
+BAN: So, I set up my own repo for this. Currently the agenda has a link to a repo from about five years ago, but I will update that link to point to my repo. And I’ll raise the open questions with the various people involved.
+
+## Cancellation: Room for improvement
+
+Presenter: Daniel Ehrenberg (DE)
+
+- [slides](https://docs.google.com/presentation/d/1ge28UQnISRaDfHp5IFz1XNAJnQm2aGjaqZJeAY3fk5Q/)
+
+DE: I’ll be going through the presentation because RBN isn’t feeling so well. But he helped a lot with developing this content. It has been several years since we discussed cancellation, or previously cancellable promises. I wanted to review the current state of the world and what problems are solved and aren’t solved so we can think about what should come next.
+
+DE: Cancellation mechanisms should help avoid unnecessary work. Like if you do a fetch and you no longer care about its results, then you can cancel it and then if it hasn’t already completed, maybe that will allow some network bandwidth to be used for something else. It can also be important to save memory by unlinking data structures, especially by taking certain callbacks and making them not referenced from certain places. Cancellation is often used in a certain nesting structure, which I will get into later.
+
+DE: With cancellation, there is an important factor, separation of concerns. There’s both something that’s controlling the cancellation, a source of it. And there’s several things that might be listening to that, so they are cancelled when the cancellation is triggered. In order to actually make cancellation be used—in order to actually avoid the unnecessary work, it has to be threaded through a lot of operations. So ergonomics are important. I would like to solve the problem of having things we create in TC39 be able to work with a cancellation mechanism, which isn’t something that we currently have been able to do.
+
+DE: The current cancellation mechanism in the web platform is `AbortController`/`AbortSignal`. There are two kinds of capabilities, two objects. One is the controller, on which you can abort: `controller.abort()`. And the other is the signal that you pass down to things, which will be aborted when it’s canceled.
+
+DE: There are also nice convenience methods in WHATWG DOM or HTML. `AbortSignal.timeout` is a shortcut: you don’t have to make a controller, do the setTimeout, and abort the `AbortController`. It gives you a pre-made signal, because this is quite a common operation. That lets you have a fetch with a timeout as `fetch(..., { signal: AbortSignal.timeout(5000) })`.
+
+DE: `AbortSignal.any` is a new capability which allows several `AbortSignal`s to be combined, triggering when any of them aborts. It turns out you cannot implement this in pure JS on top of `AbortSignal` itself because of memory issues. Here is an example of an API that takes an `AbortSignal` as an argument: to make something happen when the abort signal is canceled, you use `addEventListener("abort", ...)`. Then, if you want to use it, you can pass in that signal and it works just like with fetch.
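+
+The convention DE describes can be sketched as follows. `doWork` is a hypothetical cancellable operation, not a real API; the combination with `AbortSignal.timeout` shows the two conveniences together.
+
+```javascript
+// Sketch of a signal-accepting API; `doWork` is a hypothetical function.
+function doWork({ signal } = {}) {
+  return new Promise((resolve, reject) => {
+    if (signal?.aborted) return reject(signal.reason);
+    const timer = setTimeout(() => resolve("done"), 1000);
+    // { once: true } lets the abort listener dispose of itself after firing.
+    signal?.addEventListener("abort", () => {
+      clearTimeout(timer);
+      reject(signal.reason);
+    }, { once: true });
+  });
+}
+
+// Combine an external controller with a timeout via AbortSignal.any.
+const controller = new AbortController();
+const combined = AbortSignal.any([controller.signal, AbortSignal.timeout(5000)]);
+doWork({ signal: combined }).catch(() => { /* cancelled or timed out */ });
+controller.abort(); // aborting the controller also aborts the combined signal
+```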
+
+DE: Let’s revisit how well `AbortController` and `AbortSignal` meet the goals. This is not a TC39 API, but it is an API, or at least a problem space, that we discussed in TC39 several years ago. I want to take this discussion to WHATWG as well, but it is of importance to JavaScript too, so I wanted to discuss it here also.
+
+DE: We want to avoid unnecessary work with `AbortController`. The `.abort()` method expresses this “don’t care” property. To propagate that message, you pass the abort signal down and register event handlers on it, as in the previous examples. As for room for improvement: comparing this to other cancellation APIs, such as Bluebird’s promise cancellation, there is a general need for a “reference counting” mechanism, where several different things express interest and, when the reference count reaches zero, because all of them end up uninterested, the work gets canceled. I think this would actually fit well with signals. This is something I want to prove out over time, but the unwatch callback that Signals computeds can take may be a good way to model this, with the behavior kind of naturally falling out. That’s “citation needed”, but I think this can compose well.
+
+DE: Unlinking data structures: You may think that JavaScript garbage collection means that developers don’t need to think about disposing of resources all of the time. But in the way that web frameworks work, with their event handlers, they are always thinking about disposing of resources. For example, when something gets removed from the UI, you have to unlink the event handlers from it, otherwise there will be various cyclic and partially connected data structures, and things will be held alive longer in memory even if nothing ever gets triggered on them. The way addEventListener takes an `AbortSignal` as a parameter is a pretty nice API to compose with all of this, because the same signal can be passed to several listeners, providing a way to unsubscribe them as a group.
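+
+The group-unsubscription pattern can be sketched with a plain `EventTarget`; this is an illustration, not an example from the discussion:
+
+```javascript
+// Registering several listeners with one signal; aborting removes them all.
+const controller = new AbortController();
+const target = new EventTarget();
+let count = 0;
+target.addEventListener("ping", () => count++, { signal: controller.signal });
+target.addEventListener("pong", () => count++, { signal: controller.signal });
+target.dispatchEvent(new Event("ping")); // count becomes 1
+controller.abort();                      // unlinks both listeners at once
+target.dispatchEvent(new Event("ping")); // no listener left; count stays 1
+```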
+
+DE: Room for improvement in unlinking data structures: RBN has pointed out that sometimes a cancellable operation gets past the point of no return, when it is no longer going to be canceled. Memory is potentially wasted when we hold on to the cancellation reactions, those event listeners for things that won’t be triggered. And even if you do trigger cancellation, you had better pass `{ once: true }` to the reaction so that it disposes of itself; the examples on MDN don’t even do that. There is currently no API to drop all cancellation reactions related to an `AbortController`.
+
+DE: Nesting structure: `AbortSignal.any` allows us to nest cancellation as a tree. And it might be a little bit upside down from how you would initially picture a tree. The idea is you have one `AbortSignal` and `AbortController` kind of at the top. And each child of it consists of making a new `AbortController` for cancelling this subtree, and then use `AbortSignal.any([the parameter, the controller’s signal])` as your signal for work in this subtree. So then, if the parent is canceled, it will cancel all of its descendants, but also, you can cancel just a particular subtree using their own `AbortController`.
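+
+The tree pattern above can be sketched like this, as a minimal illustration of the structure described:
+
+```javascript
+// Each subtree combines the parent's signal with its own controller's signal.
+const root = new AbortController();
+
+function makeSubtree(parentSignal) {
+  const controller = new AbortController();
+  // Aborted when either the parent or this subtree's own controller aborts.
+  const signal = AbortSignal.any([parentSignal, controller.signal]);
+  return { controller, signal };
+}
+
+const child = makeSubtree(root.signal);
+const grandchild = makeSubtree(child.signal);
+
+child.controller.abort();
+// child and grandchild are now aborted, but the root is unaffected:
+console.log(root.signal.aborted, child.signal.aborted, grandchild.signal.aborted);
+// → false true true
+```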
+
+DE: So if you want to assemble a program, this could potentially be a DOM tree, or it could be a call tree of async functions. You often have pieces within there that have their own reason for cancelling, or you may cancel the whole broader thing. This is a common structure, and it is important to have `AbortSignal.any` as a non-leaky way to build it. But the `AbortSignal.any` pattern may be difficult for developers to understand. Maybe we want a convenience API for this common tree case.
+
+DE: Separation of concerns. We discussed `AbortController` triggers cancel and it reacts to the cancel. From `AbortController`s, you can actually get the `AbortSignal`. So, having an `AbortController` implies having the `AbortSignal` capability, but not the other way around. That is good. It is important to have that separation.
+
+DE: But `AbortSignal` being based on EventTarget permits various different kinds of accidental misuse. I mentioned the way that you really should be passing { once: true }, there’s another thing which is that you can just call `dispatchEvent` on an `AbortSignal` and simulate an abort event even though that should be outside of what your capability can do. That should only be done by the `AbortController`.
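+
+The capability leak DE mentions is observable today (a sketch): anyone holding just the signal can fire its abort listeners without the controller, leaving the signal’s state inconsistent.
+
+```javascript
+const { signal } = new AbortController();
+let observed = false;
+signal.addEventListener("abort", () => { observed = true; });
+// No controller involved, yet the listeners fire:
+signal.dispatchEvent(new Event("abort"));
+console.log(observed, signal.aborted); // → true false
+```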
+
+DE: And actually, inside of the DOM, there is a built-in mechanism for reactions which doesn’t go through all of this. That separation is actually observable in sort of the ordering of when things run. Further, you can also override `onabort` from an `AbortSignal`. So it should be when you pass `AbortSignal` into things, they have the ability to subscribe to the cancellation, but they don’t have the ability to unsubscribe other people. But by overriding onabort, you do end up potentially being able to do that [if they failed to use `addEventListener` properly].
+
+DE: Another question is how much will it actually be adopted by applications? This is one where, I’m not sure if between RBN and I we have a fully common answer yet. But it is something that I personally care about. I think, we would get higher adoption of `AbortSignal` and `AbortController` if we pass it through implicitly. Right now, every layer of the call stack has to pass the `AbortSignal` through. There’s a convention to use an options bag. This gives some potential for adoption. And I have been seeing some more adoption and some frameworks doing more to pass it through.
+
+DE: But maybe we could use an AsyncContext variable, one that is provided by the platform, where the variable itself isn’t exposed but is set by the platform, to hold the current ambient `AbortSignal` and be read by operations that opt into reading it through a signal-inherit option. RBN was explaining concerns about surprising existing programs with new behavior, but I would argue this is still easier than explicitly threading abort signals down through the call graph.
+
+DE: Finally, we currently have not been able to use cancellation from TC39 proposals. We’ve been chatting informally about TC39-defined timers in Matrix; you may want to unsubscribe from those. Another thing is Signals: the proposed API does have unsubscription through the unwatch API, but could not use `AbortSignal` because it is a web API.
+
+DE: There are a few possible strategies. One is to do everything in the web platform. But I think there is value in TC39: we can provide good integration into the language to enable cancellation features, if those make sense, and we have a good connection with the community and are diligent about how we do things. I hope we can come up with a way for this group to also define cancellable APIs.
+
+DE: One strategy is to define host hooks such that `AbortController` and `AbortSignal` remain web APIs, but we can accept `AbortSignal` as an argument to TC39-defined APIs and react to cancellation. That feels a little uncomfortable to me, since it means that JavaScript depends on that web API, but it is an option to consider.
+
+DE: Another is to move `AbortController` and `AbortSignal` to TC39, but that seems messier given the EventTarget integration. Making that move in a way that has no observable changes seems difficult and very messy.
+
+DE: Or we can even consider defining a different cancellation API, which would have to work well with the `AbortController` and `AbortSignal`. It would be pretty rare and unfortunate to define multiple APIs for the same thing. I think the name chosen for `AbortController` is particularly unfortunate.
+
+DE: For next steps, I want to work together with other people, including RBN and others, on investigating these problems and explaining them in more forums; working with ecosystem libraries, such as RBN’s cancel token API, and server-side frameworks; and seeing what we can do between TC39 and WHATWG on cancellation. If you want to get involved, please get in touch with RBN or me. This is a very early effort.
+
+RBN: I have some concerns with the idea of an ambient `AbortSignal`; I’m providing some of that feedback in chat. It is something we have discussed on-and-off in various meetings; I think the last time this may have been discussed was around 2017. But again, if you want to talk more about this and my concerns you can talk to me offline or on Matrix. There are a lot of different bits and pieces in this to go into, more than we have time for in the short discussion today.
+
+DE: Yeah, I wanted to bring up the discussion at the early stage because I’m also curious whether people see this as an actual problem. Maybe I’m making a big point out of nothing in particular.
+
+NRO: Yeah, I’m happy to see this is being explored again. Personally, as a developer, I have been using `AbortController` quite frequently since I discovered it exists. I never had problems with availability, because it is present in both Node.js and the browser. But it still feels like the contexts where I use it are not really anything Node-specific or browser-specific, so it feels weird to have to go to platform APIs that are separate from the language itself.
+
+DE: Has anybody used `AbortController` other than NRO? How has it worked out for you?
+
+CHU: It’s especially useful for event listeners, because removing them the other way is awkward.
+
+LCA: One thing we use them for is our HTTP server: it takes a handler that is passed a request object carrying a signal, which gets aborted when the user aborts the request, which happens very frequently. Users can pass that `AbortSignal` to downstream requests they make so those, for example, also get aborted. We see actually quite a lot of adoption of this. But it requires support from libraries: if a library makes the call under the hood and the signal is not threaded through, there is no way to abort it. There has to be an upstream API change to wire this through. I don’t think we have a good solution for this. I agree with RBN that the signal needs to be opt-in if it is inherited. If there were a way to make this work without that, it would be very nice: it would mean library authors would not have to think about cancellation when writing libraries, and someone who needs cancellation could make it work without upstream changes.
+
+KG: Yes, I have used `AbortController`. Yes it is very useful. To answer Dan’s question.
+
+PFC: I haven’t used `AbortSignal` and `AbortController` on the web platform so much yet. But I have used the equivalent on other platforms where you have to thread an object through all of the async operations. And for me, it makes a lot of difference whether you come from a position of wanting everything to be cancellable by default or want it not cancellable. There is no need for something to be cancellable if there is no UI cancel button for the user to press. But on the other hand, maybe your goal is to get people to design more UIs with more cancel buttons. I was curious whether you want this API to take a position on that?
+
+DE: Well, it has to take a position one way or the other. The current position is not threading through. I thought it might be useful to do the automated threading through, but there are these costs.
+
+RBN: To add onto that, my concern with automatically threading things through is that today, somebody who looks at a package and says "this package wasn’t written to be cancellable" has to submit an upstream PR to make it so. There are packages with APIs that expect to complete in the best case: say, a three-phase commit in a distributed transaction system, where you want to be sure that the request you submit will go through barring a worst-case network interruption. You don’t want someone navigating away in an SPA to break that while the code is still running and resident in memory; there is no reason it should stop. But that code suddenly supporting cancellation, even though it wasn’t written to handle it, is problematic, because it could break an existing assumption about how the code runs. Now I have to write defensive code around my own code so it runs without any type of cancellation. I’m back to the same thing we have to do today, saving off primitives and intrinsics at the start to make sure nobody is polyfilling or hijacking my code, except now I have to do it for every API. So I have to write defensive code around something that should ideally be passed in, because that is the appropriate separation of concerns.
+
+DE: Yes, that’s why in the slides I’m suggesting an explicit `{ signal: "inherit" }`.
+
+### Summary / Conclusion
+
+- DE raised several issues with `AbortController`/`AbortSignal` which affect its reliability and usability.
+- Many TC39 delegates have found `AbortController`/`AbortSignal` to be a useful API.
+- Some interest in ensuring that there is an API for this functionality “in JavaScript”.
+- RBN explained that threading through `AbortSignal` implicitly, via AsyncContext, would cause unexpected effects on existing code, possibly affecting their soundness. This may be mitigated by explicit opt-in (e.g., `{ signal: "inherit" }`).
+- LCA and PFC saw both sides of the threading question (whether the `AbortSignal` should be threaded through via AsyncContext and used by default).
+- Unclear whether committee members have run into any of the other issues described aside from implicit/explicit threading; no discussion about this.
+- DE and RBN to work with WHATWG and TC39 on these issues and invite others to join them.
+
+## Joint Iteration for Stage 2.7
+
+Presenter: Michael Ficarra (MF)
+
+- [proposal](https://github.com/tc39/proposal-joint-iteration)
+- [slides](https://docs.google.com/presentation/d/1Qj5h6MajJnji1obZsXea_cUgfwxur-yT6v-8rBTLqtg/)
+
+MF: This is joint iteration for stage 2.7. As of when it was last presented, there are two methods being introduced: `zipToArrays` and `zipToObjects`. `zipToArrays` takes an iterable of iterables, which is self-explanatory. `zipToObjects` takes named iterables, that is, an object whose own properties are iterables. `zipToArrays` produces tuples that align with the input iterables, and `zipToObjects` produces records whose fields align with the names of the input iterables. Each of these methods also takes an options bag as an optional second parameter. There's a `longest` option, which changes the behavior from the default of shortest—which means stop iterating once any iterator you’ve been passed has completed—to longest, which means we continue until all of the iterators you have been passed have completed. There is also a third mode of operation that can be enabled with the `strict` option: if the iterators don’t all complete after the same number of elements have been yielded, we throw a TypeError. And the final option that can be passed is `padding`. This is used with the `longest` option as the values that stand in for what would have been yielded by already-exhausted iterators.
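+
+As an illustration of the default (shortest) semantics, here is a rough user-land approximation of `zipToArrays` as a generator. This is illustrative only and omits details of the real proposal, such as closing the remaining iterators on early exit:
+
+```javascript
+function* zipToArrays(iterables) {
+  const iters = Array.from(iterables, (it) => it[Symbol.iterator]());
+  while (true) {
+    const results = iters.map((it) => it.next());
+    if (results.some((r) => r.done)) return; // shortest: stop at first exhaustion
+    yield results.map((r) => r.value);
+  }
+}
+
+console.log([...zipToArrays([[1, 2, 3], ["a", "b"]])]);
+// → [ [ 1, 'a' ], [ 2, 'b' ] ]
+```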
+
+MF: Hopefully that explanation was clear; I felt like I was a little rambly. There are three open questions at the moment, which I think we can resolve in real time and still achieve stage 2.7 today. The first open question is the names. We’ve gone back and forth on this a little bit. I originally proposed that these two methods were actually a single method that distinguished based on the type of the first value, but the committee generally leaned towards splitting them. So we split them into these two methods, `zipToArrays` and `zipToObjects`; that’s what they are currently called in the proposal repo. Some floated the idea of instead calling them `zip` and `zipToObjects`. Personally, I would lean towards keeping things the way they are, `zipToArrays` and `zipToObjects`. The reason is that I feel pretty strongly that when using these zipping methods, once you get past two iterators, it is best to start naming them so that you can better align the use site with the zipping site. But I’m open to changing it if the committee leans overwhelmingly in the other direction.
+
+MF: The second open question is how we treat strings. Historically, strings are iterable, but with iterator helpers we have diverged from that: we can actually test when a value passed is a string and decline to iterate it. The open pull request #25 does that. I can go either way on this; I can see good arguments in either direction for consistency, of course.
+
+MF: And then the third and final open issue is whether we keep the `longest` and `strict` options (the options that control the behavior when the iterators yield a different number of values), or instead pass a single option that unifies them into a three-state option. There are pros and cons here. Obviously, we want to make illegal states unrepresentable. The two booleans make four states, of which three are legal; that means there is an illegal state where `longest` and `strict` are both true. The alternative is passing a `mode` that is the union of three strings. The downside of that is that, technically, we now have an infinite number of possible values if we say this is typed as a string: you can mistype "shortest" or whatever. Maybe we could make these constants on the Iterator object or something, like they do with DOM node types, but that doesn’t make anything better; it just adds a layer of indirection. I’m not really sure. I do want to solve this problem, but I don’t think we have good solutions; we just have these two. I think either way we go would be fine, though, and it’s a very straightforward change if we do move from one to the other in the spec text. So I also don’t think this would prevent us from going to 2.7 today.
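+
+For comparison, the `longest` behavior with `padding` could be approximated like this. Again, this is a user-land illustration; the real option names and shapes are exactly what is under discussion:
+
+```javascript
+function* zipLongest(iterables, { padding = [] } = {}) {
+  const iters = Array.from(iterables, (it) => it[Symbol.iterator]());
+  const done = iters.map(() => false);
+  while (true) {
+    const row = iters.map((it, i) => {
+      if (done[i]) return padding[i]; // substitute padding for exhausted inputs
+      const r = it.next();
+      if (r.done) { done[i] = true; return padding[i]; }
+      return r.value;
+    });
+    if (done.every(Boolean)) return; // all inputs exhausted
+    yield row;
+  }
+}
+
+console.log([...zipLongest([[1, 2], ["a", "b", "c"]], { padding: [0, "?"] })]);
+// → [ [ 1, 'a' ], [ 2, 'b' ], [ 0, 'c' ] ]
+```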
+
+MF: I wanted to notify the committee of this decision that was made. This was a change from the last time this was presented. Previously, when you passed no iterators to these methods they would yield an infinite iterator of empty values, which, depending on your mental model of how this operates, it is consistent with the cases where you do pass iterators. It just depends on the mental model you have. But we changed it to instead never yield any values. So, they return already exhausted iterators. And that is more consistent with the vast majority of other languages and libraries. So I felt this was a pretty clear-cut decision to align with them.
+
+MF: It was brought up in a couple of our meetings discussing this proposal by JHD that we should have array specific variants of these. As a reminder, these do work on arrays, arrays are iterable. But array-specific ones would work through different means, through array indexing instead of iteration. That may have pros and cons. And there also may be pros and cons with the clarity of the code produced if you’re referring to arrays when operating on arrays. But, at the last meeting, I agreed that if it was completely uncontroversial within committee that I would include it and we could send both of these through together, but on the thread that I directed people to, we were unable to resolve those differences and it’s clear that it is still controversial. It may still advance in its own proposal, but it will have to be independently justified.
+
+MF: So otherwise, this proposal has not changed since stage 2. I have full spec text, a polyfill in the repo, and a demo on the repo’s website. I have basic tests: not test262 level, but tests for all of the expected functionality. I received sign-off from the reviewers JHD and NRO. I would like to ask for stage 2.7 once we have resolved the three questions that I ran through. I’m ready to go to the queue.
+
+RPR: Okay. Ashley is on the queue.
+
+ACE: Hi. Yeah. When we talked about this in Bloomberg during our review, we had a preference for just `zip`; there is so much precedent for that name in utilities and other languages. I can see that being explicit has advantages, but we prefer the shortness of `zip` for the common case, so if others feel the same, we’d add our voice to that group. Moving on to my next item: we also had a preference for the string enum rather than the two boolean options.
+
+MF: For the mode?
+
+ACE: Yeah, for mode. Yeah.
+
+RPR: And—obviously, you already have the items, Ashley. So John?
+
+JKP: Sorry, I don’t think I saw Ashley was typing as well. I was voicing a preference for the mode with the multiple strings over the other. Thank you.
+
+RPR: RGN?
+
+RGN: Yeah, I think the string-valued mode already has precedent in 402, as we’ll hear about later, and there are also plans for it in Temporal as well. I don’t think we should have any reservations about extending that pattern.
+
+JHD: Yeah. I guess I’m confused that it’s controversial. I mean, for the array-specific variant, it is certainly fine if the proposal proceeds without it. But the semantics of an array-specific proposal are basically set by this one, so other than having separate conversations about independent justification, I’m not sure what else there would be to discuss. It’s clear from previous plenaries that it would not be an Array prototype method, so it would basically be `Array.something`, where "something" is whatever the method names are in this proposal.
+
+JHD: So, if we’re fine with this, I guess, yeah, I’m not sure I see the benefit of splitting it up. And if, if I were to make a new proposal, it seems like it would almost immediately go, you know, to a somewhat advanced stage given there’s virtually no design space in the wake of this proposal.
+
+RPR: All right. This is SYG's prepared statement. I’m assuming he is not on the call right now. It says:
+
+> V8 has the following concerns for Stage 2.7, but **not blocking** if they enjoy committee consensus otherwise:
+>
+> - Dislike the current method names
+> - Dislike using two boolean options for mutually exclusive options instead of one option with string constants
+> - Supports omission of Array method because `toArray()` exists
+>
+> See [this issue](https://github.com/tc39/proposal-joint-iteration/issues/18#issuecomment-2151111702) for details.
+
+JHD: The linked issue doesn’t discuss the array thing at all, but there is a comment in a different issue, which I’m trying to look at right now. From what I can infer, the last statement of SYG’s comment on issue number one on this repo is that `toArray` is not that inconvenient. In other words, the general argument I have heard is that using the iterator form and then calling `toArray` is acceptable for some and unergonomic for others. I haven’t heard many strong arguments against the array variant beyond that. The arguments we have already heard are: you can already do it because arrays are iterable, and it is fine to call `toArray` on the iterator variant. If someone has an additional argument, I would love it if they could throw it on the queue. At this point I’ll be done speaking. Thank you.
+
+MAG has a reply.
+
+MAG: Yeah, there we go. I just kind of don’t want to have the Array prototype fight we get every time we try to land something on the Array prototype.
+
+JHD: I was proposing `Array.zip`, a static method, because of the problems with prototype methods.
+
+MAG: I heard you say prototype and I thought you meant prototype method. Withdrawn.
+
+JHD: I was echoing the dislike you’re describing, because it’s been shared in previous plenaries, so I’m assuming there would have to be really compelling reasons to do a prototype method, and in this case a static method is perfectly fine.
+
+LCA: Yeah, I have a weak preference for `zip` and `zipToObjects`, but I could really go either way.
+
+LCA: On the string case, I think as a committee we should make a decision and apply it generally. We talked a few meetings ago about whether we want to coerce things or assert, and the general conclusion was that we will not coerce and will instead throw on wrong input. We had a similar discussion a couple of meetings ago, and the conclusion for now was also that we do not want to implicitly iterate strings. So I have a preference to not implicitly iterate strings here, and I think as a committee we should decide whether we want to apply this to future proposals too. Finally, on the mode option, I think we should use the string: strings like this are used in fetch and many other APIs, people are familiar with them, they are easy to use, and they remove the illegal case of specifying two conflicting options.
+
+MM: Okay, so I am just concerned with making sure that the sync iterator helpers and the async iterator helpers are coordinated, so that people have minimal surprise going from one to the other. I’m not asking for a detailed answer, but what is the expectation of parallelism: what do you expect the async counterparts of these helpers to look like?
+
+MF: That's a good question. For the things that are in the stage 3 iterator helpers proposal, we have a very good idea of what their counterparts in async iterator helpers look like. For some of these follow-on proposals that I have done since, I haven't really tried to make an assumption about how they may appear in async iterator helpers. And I don't know how helpful it would be to do that without actually going the full route of pursuing a proposal because, as we've seen when we started working on concurrency in async iterator helpers, and as we've now started to see when we've started working on the unordered space, concurrent task management, these solutions might look a lot different than we initially expected them to until we did a lot of work. Maybe it'd still be worthwhile doing some amount of work to try to get an idea, but it's not going to be very reliable.
+
+MM: It is easy for me to volunteer others to do work, but I will say that I think some moderate amount of work to check whether similar APIs seem reasonable for async iterator helpers would be enough to at least spot any problems, because if we do something that accidentally makes these more different from the corresponding async iterator helpers than they need to be, that would be a shame.
+
+MF: I agree, and I think that it is worthwhile to spend a bit of time at least thinking about it.
+
+MM: Okay, thank you. Um, and if you would like to do some separate brainstorming on that, that would be fine. Thanks.
+
+MF: Can you repeat that, I did not hear that?
+
+MM: It sounds like it might be something that could use some interactive brainstorming, since it seems like it has not been examined very much yet. So, talking the issues through, and I am volunteering to be a springboard for that.
+
+MF: I am still having a bit of trouble actually hearing but I think you are offering to work together on that?
+
+MM: Just to you know, essentially yeah. Just not a lot, but yeah enough to do initial exploration. Yes.
+
+MF: That is exactly as much as I would like to as well.
+
+LCA: Yeah, I just wanted to say, I think we should do this analysis, and I hope that we can revisit it once we have figured out concurrency for async iterators. But yeah, I don’t think there is anything blocking us; we should definitely take a look.
+
+RBN: I will say, on the first topic, I have a preference for `zip`, mostly because that is what it is called in every other library that you look at, and especially it is what developers are familiar with. So naming it `zipToArrays` and adding the extra suffix for what is most likely going to be the most common case seems like unnecessary overhead for users. So I don’t think we need the `ToArrays` part of it; `zip` is fine. And for the other topic, discouraging implicit string iteration: I have long held that the implicit iterator on the string prototype was a mistake. I think string iteration is great, and I think we could easily have had `String.prototype.values` and been done with it. Mostly because implicit iteration is a poor developer experience for most cases: JavaScript is untyped and not type checked, and we don’t have an individual type that represents a character in a string. It is very easy to iterate over a string without realizing that you are iterating over a string and not something else, and it is easy to get the wrong thing in many cases. If we had an explicit iterator that you could get by calling `string.values()` to get the actual characters, you could iterate over that manually, and that is generally what we suggested for the iterator helpers API we worked with.
+[ WRITER SWITCHOVER ] …object iterators, and that is why we have had to add things like `Symbol.iterator`-based conversion into an array. And we recognized this is something to avoid, and continuing to avoid it is a good thing: the API should discourage implicit string iteration, treating it as a likely mistake in the user's code rather than what they were expecting. If it is something they do want, they can manually reach for it and say explicitly, this is something I want to do.
+
+KG: This is on the first topic. I am fine with the names `zip` and `zipToObjects`, but I want to point out that making the array form the more conveniently named one is a little unfortunate, because the natural way of using it is to destructure the results with iterable destructuring, which invokes the iterator protocol, whereas the natural way of using `zipToObjects` is to use braced object destructuring on the thing the zip iterator produces, which does not invoke the iterator protocol. It is unfortunate to invite use of the iteration protocol when it is not necessary, and it is unfortunate that if we give `zipToArrays` the good name, we will encourage that overhead. With that being said, I do agree that `zipToArrays` is known as zip in every other language; it is just unfortunate that we are giving the more convenient name to the less performant thing.
+
+MF: And it's not just about performance, there's also readability associated with it, which I mentioned earlier. Once you have more than a couple of them, it's really hard to align visually. You have to count, this is the fifth iterator that I've passed, so it's going to be the fifth thing that I destructure. It's just really, really hard to do as a human.
+
+RBN: I would say most of my use cases of zip anywhere else have never been to destructure the results. They have been to merge two things and work with the results, usually by continuing to do more things to the query; I almost never have a case of destructuring the result of zip.
+
+KG: Well but a common thing is that you zip two things, and then you map over the result. And what you do with the mapper function is, you take the two items out of each result and the way you take those two items is by destructuring.
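
The zip-then-map-then-destructure pattern KG describes can be sketched in userland. The `zip` below is a hypothetical stand-in for the proposed `Iterator.zip`, written as a plain generator with "shortest" semantics; the real API's shape and options may differ.

```javascript
// Userland sketch of zip with "shortest" semantics (stand-in for the proposal).
function* zip(...iterables) {
  const iterators = iterables.map((it) => it[Symbol.iterator]());
  while (true) {
    const results = iterators.map((it) => it.next());
    if (results.some((r) => r.done)) return; // stop at the shortest input
    yield results.map((r) => r.value);
  }
}

const names = ["a", "b", "c"];
const values = [1, 2, 3];

// The common pattern: zip two sequences, then destructure each pair in a map.
const pairs = [...zip(names, values)].map(([name, value]) => `${name}=${value}`);
console.log(pairs); // ["a=1", "b=2", "c=3"]
```

Note that both the spread and the array destructuring in the mapper invoke the iteration protocol, which is exactly the overhead KG points out.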
+
+RBN: I see, yes.
+
+MF: So in my experience, that's almost every single time I use it.
+
+CDA: SFC?
+
+SFC: Yeah, I was on the queue earlier: plus one for strings; Booleans are bad here. In ECMA-402 we had examples where we had Boolean options and we decided to replace a Boolean with a Boolean-plus-string, because the Boolean was not expressive enough, and this happens over and over again. Basically every API has something that looks like it is clearly a Boolean but turns out not to be, so a string is definitely the way to go.
+
+CDA: Duncan?
+
+DMM: Oh, sorry. I definitely don’t like introducing more implicit string iteration, because as an API it is normally confusing. We have `String.prototype[Symbol.iterator]`, but I think this is an area where we could improve the language, because when you do want to explicitly iterate over a string by characters or something like that, it does not give you the thing that you want, and your code becomes too complicated for that to be the right tool. So what I think we should consider, for string iteration, is something like Swift’s API for iterating a string, which gives you the actual user-perceived characters that a user of the API would want to get hold of. That is a tricky thing to write and get correct, and it is something that we should probably consider putting in the language.
+
+CDA: That is it for the queue. Is there a reply for Dan?
+
+DE: As—oh, we have grapheme iteration in the language: we have `Intl.Segmenter`, where you can pass the grapheme granularity and it will iterate through graphemes. Anyway, I agree with others that not iterating through strings sounds good, and I support Stage 2.7.
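
DE's point can be seen directly: `Intl.Segmenter` (an existing ECMA-402 API) with `granularity: "grapheme"` iterates user-perceived characters, while the default String iterator yields code points.

```javascript
// Grapheme-aware iteration via Intl.Segmenter, contrasted with the
// String iterator's per-code-point behavior.
const text = "e\u0301x"; // "éx" written as "e" + combining acute accent + "x"

const codePoints = [...text]; // String[Symbol.iterator] yields code points
console.log(codePoints.length); // 3: "e", "\u0301", "x"

const segmenter = new Intl.Segmenter("en", { granularity: "grapheme" });
const graphemes = [...segmenter.segment(text)].map((s) => s.segment);
console.log(graphemes.length); // 2: "é" (as one cluster) and "x"
```
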
+
+MF: Well, um, okay, I would like to just review the decision points here. It sounds like on naming, what I have heard is that we should use zip instead of zipToArrays. It is not my preference, but we will make that call. We are all in agreement that we should not iterate strings, and we will use that to set a precedent for further APIs that accept iterables. As far as the mode specification, the overwhelming majority opinion, if not unanimous, is that we pass a string for the mode option and not two Booleans. With that being said, MM did ask for us to spend some time looking into how this would align with a possible async iterators variant, and I am not sure if we should be asking for Stage 2.7 for this. I don’t know if KG was on the call when MM asked for that, and whether he has thought about this, since KG has been working on async iterator helpers.
+
+KG: I have, and I have thought about it. And I am pretty sure it works completely naturally. Each call to `.next` on the result of the async iterator zip just does one call to `.next` on each of the underlying iterators and bundles up the results. And you don’t wait for the earlier things to settle. It is just completely the obvious thing. I will not promise that is how it will be, but I don’t see a way that the design of the synchronous iterator version could possibly need to change in order to accommodate the async iterators.
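
A minimal sketch of the behavior KG describes, using a hypothetical userland `zipAsync` (the eventual async iterator helpers API may well differ): each `.next()` on the zipped iterator issues one `.next()` to every underlying iterator and awaits them together, rather than serializing on earlier results.

```javascript
// Hypothetical sketch, not the proposal: async zip that forwards one
// .next() per underlying iterator, awaited concurrently.
async function* zipAsync(...iterables) {
  const iterators = iterables.map(
    (it) => it[Symbol.asyncIterator]?.() ?? it[Symbol.iterator]()
  );
  while (true) {
    // One .next() per underlying iterator, awaited together, not serially.
    const results = await Promise.all(iterators.map((it) => it.next()));
    if (results.some((r) => r.done)) return;
    yield results.map((r) => r.value);
  }
}

async function* letters() {
  yield "a";
  yield "b";
}

(async () => {
  const out = [];
  for await (const pair of zipAsync(letters(), [1, 2, 3])) out.push(pair);
  console.log(out); // [["a", 1], ["b", 2]]
})();
```
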
+
+MF: How would you think of the risk here moving forward with Stage 2.7 with only the amount of thinking that we have done so far?
+
+KG: I am not worried about it.
+
+MM: So yeah, let’s check. The fact that KG has thought about this and has that fairly confident impression is what I am looking for. I just needed a sanity check, and it sounds like KG is already on top of that. I would like to do more exploration, but at this point, it’s more of a check in case something was overlooked, which is lower probability. So, yeah, I’m happy to go forward with 2.7 now and then we just double-check it in case there’s a problem.
+
+MF: Okay, if you’re happy with it, then I’m happy asking for 2.7.
+
+CDA: Okay, we are just about out of time. JHD, do you want to be brief? I see you’re on the queue.
+
+JHD: Yeah, just so with those three changes MF just discussed, I do support 2.7. But just to be very clear, my intention is to come back at the next plenary. I was going to do it this plenary but I didn’t because I thought it would still be included, but I’m coming back with an Array.zip and Array.zipToObject proposal. So my intention is to ask for some form of rapid stage advancement depending on whether I discover any issues or not. Because it will just be cribbing the semantics of this proposal. So if somebody has a new argument against it besides duplication and, you know, ergonomics, I would love to hear it between now and then. Thanks.
+
+CDA: All right. So you have support for 2.7 from JHD, from MM, from DE. Are there any other voices of support for 2.7?
+
+MF: We have LCA raising his hand in the room.
+
+DE: I think thumbs up from Chip.
+
+CDA: We got a +1 from Daniel Minor. All right. You have 2.7. Would you like—I know you kind of summarized a little bit some of the issues a few moments ago, but do you want to provide a summary and conclusion for the notes?
+
+### Summary / Conclusion
+
+MF: I’ll do it real quick. On the naming issue, we chose to rename zipToArrays to zip. On the string iteration issue, we chose to not iterate strings that were given as input. And on the mode selection issue, we chose to replace the two Booleans with a single mode option, which is one of three strings: "shortest", "longest", or "strict". And that’s it, advance to Stage 2.7.
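
The agreed option shape can be illustrated with a userland sketch. The function below is modeled on the proposal but is not the spec algorithm; in particular, the "strict" handling here is simplified.

```javascript
// Sketch of a zip that takes a single `mode` string option
// ("shortest", "longest", or "strict") instead of two Booleans.
function* zip(iterables, { mode = "shortest" } = {}) {
  const iterators = iterables.map((it) => it[Symbol.iterator]());
  while (true) {
    const results = iterators.map((it) => it.next());
    const doneCount = results.filter((r) => r.done).length;
    if (mode === "shortest" && doneCount > 0) return;
    if (mode === "longest" && doneCount === iterators.length) return;
    if (mode === "strict" && doneCount > 0) {
      if (doneCount !== iterators.length) throw new RangeError("length mismatch");
      return;
    }
    yield results.map((r) => (r.done ? undefined : r.value));
  }
}

console.log([...zip([[1, 2, 3], ["a", "b"]], { mode: "longest" })]);
// [[1, "a"], [2, "b"], [3, undefined]]
```
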
+
+## Temporal Stage 3 update and scope reduction
+
+Presenter: PFC, JGT
+
+- [proposal](https://github.com/tc39/proposal-temporal)
+- [slides](https://docs.google.com/presentation/d/1PPMAxVnVjFwRPuJwOvVsw9nZLQ6jDM8Hd5PNO0Grp4I/)
+
+SYG's prepared statement:
+
+> V8 strongly supports this scope reduction and thanks the champions for being so open to late-stage simplification.
+>
+> V8 will take all the reduction it can get: the smaller the proposal, the higher the likelihood it can be implemented and shipped. That said, we don't feel super strongly about any particular method's reduction. For example, we see where developers are coming from for wanting `subtract()`: it is a ubiquitous convenience, even though subtraction everywhere else also means addition of the negation.
+
+PFC: For the rest of the afternoon, we are going to be talking about Temporal. Hopefully we can finish a bit earlier than 90 minutes, but we have a lot to discuss. My name is Philip Chimento and I work for Igalia and I’m doing this work in partnership with Bloomberg, and one of my co-champions, Justin Grant, is going to be presenting some parts of this as well. First a short progress update. I don’t remember if we announced this in the last plenary or not, but the long awaited standardization of the ISO string format with timezone and calendar annotations that we’ve all been waiting for is now an official RFC. It’s RFC-9557. A big thanks to the USA, among other people, for keeping that moving through the long and not entirely pleasant process. We’ve got some active implementations going on in SpiderMonkey, which is being done by ABL, I believe. There’s an implementation going on in Boa, which is being done by a few people, including JWS over here in the room. There’s a polyfill implementation being pursued by Adam Shaw as part of the Fullcalendar organization. And all these have been incredibly helpful for finding both editorial issues in the spec as well as some of the bug fixes that we’ll be discussing today. So one part of this presentation is going to be presenting two minor normative PRs to address issues that have been reported by implementers. One is an arithmetic corner case bug and the other is a non-ASCII character in the ISO 8601 date string grammar. And then the bulk of the presentation will be talking about how we are responding to feedback from implementations.
+
+JGT: And, boy, have we got some feedback. So, you know, we just sort of screenshotted the purpose of Stage 3 here, which is to understand when people are going to build production grade implementations to, you know, help expose issues and to get feedback from implementers. So what we’ve really been absorbing over, I guess, the last six months or so has been some very strong feedback. I would say from all, you know—from Google, from Apple, from Mozilla, and others. That Temporal is too big. And it’s interesting in that often times in the committee, we focus on sort of what is the right thing to do for the language. And we have to convince each other that these things are the right things, and in this case, the pressure is really coming from outside, right? It’s coming from not necessarily people in ECMAScript, but people who are responsible for Apple Watch or for Android, who are concerned about the additional size that a proposal as large as Temporal will add. There are some engine-specific things, so, for instance, the number of functions turns out to be particularly expensive for V8 because of the way V8 stores each individual function. And, you know, obviously we’re all using laptops that have, you know, many gigabytes of storage, but if you’re talking about a low-end Android device, even something like 5K or 10K can be a significant issue for engines. And so in addition to size, there is just complexity itself in terms of the difficulty of implementing for implementers, and two things that have sort of bubbled to the top here are custom calendars and time zones, which in Temporal involve calling back into user code. We’ve heard a number of concerns from implementers. I think Dan, you’ve—on the Mozilla side, you guys have definitely raised that. We’ve heard similar concerns from Google. And so today we’re going to go through and try to propose some solutions to these concerns.
+
+PFC: That’s going to be the bulk of the presentation. But first I’d like to go through the normative issues and discuss those first. Get that out of the way. So the first one is a bug that Adam Shaw, the polyfill author, found, in an edge case when you take the difference between two plain date times, near the end of the month, when one has an earlier date and later time of the day and one has a later date and earlier time of day. You can be off by two days because of a sign pointing in the wrong direction. Thanks to Adam we have a fix for this and also improved test coverage as part of fixing this bug. Here you can see a code sample of the exact input that would trigger this bug and how it is not correct to count one month from February 28th to April 1st. It’s one month and three days. So this is the case where we just gave the wrong answer and we need to correct that.
+
+PFC: The other pull request that we’d like to discuss today is dropping support for parsing the Unicode minus sign in ISO 8601 strings. This is a suggestion from Boa. The proposal contains a grammar of the entire ISO 8601 string format, plus RFC3339 and RFC9557 now, and the exact variations that are permissible by those standards, which of those variations we support and which of those we don’t. ISO allows minus signs in these strings to be Unicode minus signs rather than ASCII hyphens. This is not something that the RFCs allow, so it’s kind of ambiguous and probably not used very much in the wild. It complicates writing parser code that could otherwise operate only on single byte ASCII strings. Now, one place where minus signs might crop up is in the names of UTC offset time zones, which since not so long ago, are supported in ECMA 402 as legal time zones for Intl.DateTimeFormat. So this is a normative change that applies outside of Temporal as well. There is a PR for ECMA-262 changing the grammar for offset timezone names. You can see in this code sample here that there is an observable difference in the code, even for engines that don’t yet implement Temporal. So, yeah, before we discuss the removals, I’d like to ask if there are questions and call for consensus on these two normative changes. I’ll give a moment for people to add themselves to the queue, if needed.
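
To illustrate the change with a simplified stand-in (the regular expression below is not the spec grammar, just a sketch of the sign rule): only the ASCII hyphen-minus remains a valid offset sign, while the Unicode minus sign U+2212, which bare ISO 8601 permits, would now be rejected.

```javascript
// Simplified illustration of the grammar change: ASCII-only sign in
// UTC offset strings. Not the actual spec grammar.
const asciiOffset = "-08:00";
const unicodeOffset = "\u221208:00"; // "−08:00" with U+2212 MINUS SIGN

const offsetPattern = /^[+-]\d{2}:\d{2}$/; // sign must be ASCII "+" or "-"
console.log(offsetPattern.test(asciiOffset)); // true
console.log(offsetPattern.test(unicodeOffset)); // false
```
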
+
+RGN: I’m specifically hoping we have comments from one or more implementers on the second one, because support for this is already widespread, so if we are going to change it, they’re affected more than they otherwise would be.
+
+DE: So I thought that this was, for the second one, borderline whether it was a normative change or not, mainly because dates have that otherwise-secret grammar that each engine can make up for themselves, and presumably, this would be inside of that, just not the Temporal one.
+
+PFC: Do you mean the secret grammar of `Date.parse`?
+
+RGN: Yeah, but that is not the case. It is unequivocally a normative change.
+
+PFC: But you could still support it in this secret grammar of `Date.parse` if you wanted.
+
+DE: Right. Right. I mean, sure, it’s a normative change. Fine. But it doesn’t actually require that any engine make any change in their implementation. It just reduces the number of guarantees that a programmer has?
+
+PFC: It does. This code snippet here would have to produce a different result, normatively.
+
+JGT: And to provide some context, right, this is a very recent addition to the grammar. And it is almost certainly not used widely. First because it’s an obscure Unicode character you’d have to get to, and second because this format for offset time zones really is only around for compatibility with Java's ZonedDateTime, and because ECMAScript does not have Temporal yet, the usage of this particular format of offset timezones, as opposed to named timezones, is going to be very unusual. So even though this is, in theory, a normative change that will have observable effects, in practice, I find it incredibly hard to imagine that this would break anybody.
+
+PFC: The only use I could think of is, like, copy/pasting a UTC offset from a Word document where Word replaced the dash with a minus sign automatically or something like that. I don’t even know if recent versions of Word still do that.
+
+JGT: In the interest of time, because we have got a lot to cover, could we ask if there are any objections, and if not, can we get consensus for this.
+
+CDA: We do have a couple folks in the queue. Daniel Minor supports the normative changes.
+
+SFC: The offset formatting example that you gave is relatively new. It’s very new. I remember we just recently proposed that. And, so, this would break existing behavior, but only this very, very new existing behavior, which I think is an important note for the record here. Yeah. Thank you, by the way, for highlighting that this is technically reachable without Temporal.
+
+PFC: That’s thanks to RGN. I didn’t realize it myself either. Should we call for consensus on these two normative changes?
+
+JGT: That’s actually a great transition to our next slide, so do we have—I think we have consensus for those two changes so we can move on? And can we --
+
+CDA: Any objections?
+
+CDA: Seeing nothing in the queue, please continue.
+
+JGT: Okay, great. So before we get into actually what we want to—how we want to reduce the scope of Temporal, I think it’s helpful to set some context for everyone first, and this is really what we’ve learned over the last six months or so about the process of getting new ECMAScript features and actually shipping them, especially in browsers, right? So if you put yourself in the shoes of someone who is looking across a browser, right, ECMAScript is just one of many components that they have to deal with, and these browsers, they need to ship in resource constrained environments like the Apple Watch or a low-end Android device, they need to be really concerned about the size on disk, and especially if you’re on a storage constrained device, they need to be concerned about download size, right, and growing that over time. They need to be worried about run time, RAM consumption of the code itself, and we’ve learned, you know, things like if you have a website with a lot of ads on it and each ad is in an iframe and each iframe has a copy of the ECMAScript built-ins, these things sort of add up a lot, even if they’re not necessarily related to the design of the language. And if you imagine one of these folks, ECMAScript is one of, you know—there’s DOM, there’s CSS, there’s the video APIs, the sound APIs, there’s hundreds of components that these folks need to wrangle and think about, so even if we are adding, you know, a relatively small amount to the size, it’s not like they would think of it that way. But rather, the right way, if you’re building one of these browsers, is essentially to put everybody on a budget and to say that, okay, of your hundreds of components, nobody can double their size in a year without approval from the CEO or however that works, right? So there’s both a technical challenge and a sort of human process challenge here. The good news is that time heals this, right? So every year, every release of the Apple Watch will have more storage. 
Every year, new Android devices are coming online at the low end that have a lot more capability than the previous years, and the old devices are being recycled. And so what that means is we can add back the things that we’re proposing to take out, but we just can’t add them all at once. So a good way to think about what we’re discussing today is that we’re deferring things, not permanently cutting them. And in reality, right, some of those things might not come back. There might not be enough community demand or champion interest in bringing them back, and that itself is a signal. And finally, over the long term, engines will optimize, right? So we mentioned before that V8 has a problem with function count. Well, you know, Temporal or no Temporal, we’re increasing the number of functions in ECMAScript, right? And at some point, it will become feasible or sensible for V8 to optimize that problem somewhat, right? But we can’t hold all our proposals and, you know, that might be years from now, so we don’t think it’s a good idea to hold Temporal for every engine to redesign themselves. And finally, so my day job is I help to build enterprise software, and all large companies have what’s called a procurement office. Their job is to negotiate with suppliers and essentially extract discounts from those suppliers. And the people that work in procurement, literally their performance reviews are based on how much money they can squeeze out of various suppliers. And you could think of the people deciding which components go into browsers as kind of like these procurement officers, right? Their job is to ensure that they can squeeze as much out of, or at least limit the growth of, all of the components that are in their purview, and you can think of the other side of that as someone like SYG, right, who’s, you know, we have champions and proposals. 
Well, SYG is the ECMAScript champion inside of Google trying to lobby the rest of Google to be able to say that ECMAScript should be able to grow. So we have designed this process of taking stuff out as a way to give those ECMAScript champions like SYG the ammunition to go back to their browser procurement officers and say, you know what, we’ve done all that we can, we have really squeezed these guys and reduced as much as possible.
+
+JGT: So next slide. Great. And so the process we went through is we literally went through every single Temporal function and every argument and essentially tried to understand what would be the impact if we took them out, right? And in some cases—we have, you know, issues all in the repo. They’re all split out. We got lots of helpful suggestions, so, like, Frank from Google was very helpful in suggesting some ideas that we hadn’t thought of. And we’ve had lots of community feedback, so the proposals that Philip is going to go through in a little bit really represent the last several months of a very focused effort from the entire champions team and many of you here and many others in the community to try to come to what could we take out that causes the least harm to the success of the proposal. So our goals in doing this: we wanted to address, as I mentioned, implementer concerns around size and complexity. Ideally, if we were going to hurt things, we wanted to hurt things that were uncommon, right? That were advanced use cases, less commonly used. What we really didn’t want to do was to make the API harder to learn for the vast majority of developers out there. One thing we also didn’t want to do is redesign anything. This proposal has been in the works for like seven years. We don’t want it to be another seven years. So, you know, our goal is either remove it or leave it in, not to crack it open again.
+
+JGT: We also wanted to make sure that we didn’t make any future-incompatible changes, right? So everything that we’re proposing today are things that if there’s community demand, we can put back. So that was very important to us. And in some cases, the work of going back and really closely examining every function, some of which we hadn’t really touched in three or four years, gave us good ideas—from the community as well—about what better solutions could be. A good example: in the intervening time between when we designed the timezone API and now, open standards have emerged that define declaratively what a timezone could be, and that might completely remove the need, in a future proposal, to call out into user code. So next slide. So I’m going to hand it over to Philip here. Hopefully this is 90 minutes and we’re going to try to keep you interested by going back and forth. And Philip is doing the hard work of going through all the changes.
+
+PFC: You may be familiar with the old saying, tell ‘em what you’re gonna tell ‘em, then tell ‘em, then tell ‘em what you told ‘em. This is the part where I’m telling you what I’m gonna tell you. Here is an overview of the things that we want to remove. We’re going to talk about removing custom calendars and custom timezones. We’re going to talk about collapsing the implementations of the valueOf and toJSON methods into shared function objects. We’re going to talk about removing a bunch of functions that are mostly ergonomic APIs but have easy workarounds if they’re not present. And we’re going to talk about removing the relativeTo option from Duration addition. And so what this gains us is a net removal of 96 functions, which is just under one-third of the proposal. So there were about 300 total functions before. And there will be about 200 after. Here are the numbers for each—like, how many functions each topic removes, if you want to calculate it yourself.
+
+PFC: So that’s a lot of things to discuss. We think that some of these, people are going to want to discuss, and some of them people are not going to feel so much like they have to discuss it. So we want to give everybody the opportunity to discuss where they feel it’s needed, but also move along quickly if nobody has any comments or questions. So the way we cooked up to do this is we’ll give each topic on each slide a letter. It will be in a big circle like this one. So if you want to discuss the A removal, then just put yourself on the queue with A. And then when we get to the end, we’ll see which letters are on the queue and those are the topics that we’ll talk about. And all the other stuff, we’ll just skip. We have at the end of this, a slide with some proposed time boxes for each topic, and depending on how many topics we need to discuss, we might adjust those time boxes.
+
+PFC: Okay. The first one is calendars. The current state of the proposal is that there are built-in calendars. These are the ones that are supported natively by the implementation, and these are the same calendars that you can find in CLDR and ICU. The built-in calendars are represented as strings when you make API calls that take a calendar. There’s also a Temporal.Calendar object, which is primarily present so that you can make your own custom calendars by writing a class that extends that class. And then each type that carries a calendar has a getCalendar method where you can get the Temporal.Calendar object. So we would like to remove this. Not only the ability to define the custom calendar, but also the Temporal.Calendar class, and we will just use the built-in calendars, and those will be represented as string identifiers. So there are use cases for custom calendars. In the course of examining this removal, we also figured that it may be possible to add custom calendars back in a later proposal with less cost for implementations and less complexity. Frank had one nice idea that we’d like to explore in the future, but, you know, as Justin said, we are removing, not redesigning, so, you know, if we want—if there’s demand for custom calendars, it might make sense to add them back and then redesign them. But that is not something that we want to do right now. One big sort of deficiency of the custom calendars that we had is that they don’t integrate with the internationalization APIs. Intl.DateTimeFormat only takes the built-in calendars, and you’ll get an exception if you try to format a date with your custom calendar. If we consider adding this back in the future, we would also want to discuss how it would be possible to integrate custom calendars with Intl.DateTimeFormat. Right, so what does this mean for users? We’d like to defer custom calendars until a later proposal. 
You can still implement a calendar that’s not in CLDR by making your own object that either composes with Temporal.PlainDate or extends Temporal.PlainDate. And the main thing that you’ll be missing is the ability to polyfill custom calendars as if they were built in. We now don’t recommend this, but if you needed to do it, you would have to monkey patch. On all of these slides, I’ve got links to the commits that show the exact normative change in the spec. There’s a link on each slide that you can click. For calendars and timezones, these removals are done in the same commit, so the links to these four commits, they apply to the next slide as well.
+
+PFC: What we propose to do with calendars we’re also proposing to do with time zones. It was possible to create a custom time zone, and there is also a Temporal.TimeZone class that you could extend in order to do that. There were also a couple of advanced features on Temporal.TimeZone that you might have wanted to use, like looking up the next UTC offset transition. We’ll keep this functionality, but just move it to a different place in the API, on ZonedDateTime. So just as with calendars, we actually think there’s a better design possible in a future proposal. Something that didn’t exist when we first designed this was jsCalendar and jCal. There’s actually a JSON format for a declarative custom timezone API, instead of the design that we have here, where you call into user code. It didn’t exist when we came up with this API, but if we were bringing this feature back, it would make sense to use something like that, where we wouldn’t have this reentrancy problem, and the timezone would be completely defined in a declarative way rather than by implementing methods. So for all of the methods that existed on Temporal.TimeZone, you can still get that functionality even without the timezone class, even if it’s less ergonomic in a couple of cases—except getNextTransition and getPreviousTransition. As I mentioned, those will move to ZonedDateTime and consolidate to one method. We chose to consolidate them in order to squeeze out a savings of one extra function there. So you can see in this code sample, instead of getNextTransition, you call getTimeZoneTransition directly on the ZonedDateTime and pass a direction of ‘next’. This is actually slightly shorter than the old API. So maybe that’s a win. Another reason we chose to remove these now: we originally thought that we had to add them at the beginning because we wouldn’t be able to add them later, since originally the time zone and calendar protocol called user methods even for built-in calendars and timezones. 
Last year, we made a change to optimize those user calls away for built-in calendars and time zones. So the original reasoning, that we wouldn’t be able to add this in the future, is no longer valid, and that’s another reason why we are proposing these for removal.
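
As a rough sketch of the consolidated shape (not spec text; the transition table and epoch values below are invented for illustration), one lookup with a direction option can stand in for the two removed methods:

```javascript
// Hypothetical, simplified model of the consolidated API: a sorted list
// of transition instants (epoch milliseconds; values are made up) and one
// lookup that replaces getNextTransition/getPreviousTransition.
const transitions = [100, 200, 300];

function getTimeZoneTransition(epochMs, { direction }) {
  if (direction === 'next') {
    // First transition strictly after the given instant, or null.
    return transitions.find((t) => t > epochMs) ?? null;
  }
  // direction === 'previous': last transition strictly before it.
  const earlier = transitions.filter((t) => t < epochMs);
  return earlier.length > 0 ? earlier[earlier.length - 1] : null;
}

console.log(getTimeZoneTransition(150, { direction: 'next' }));     // 200
console.log(getTimeZoneTransition(150, { direction: 'previous' })); // 100
console.log(getTimeZoneTransition(350, { direction: 'next' }));     // null
```

The consolidation is purely an API-surface change: the same search runs either way; only the direction parameter selects which neighbor is returned.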
+
+PFC: We’d like to remove all of the getISOFields methods. We originally added getISOFields as a convenience for implementers of custom calendars, because you may have a date in a certain calendar and need to get the underlying fields in the ISO 8601 calendar in order to do the calendar calculations. When we made that change last year of optimizing built-in calendars and time zones, we discovered another use for this method: you can also use it to check whether the calendar or time zone is built-in or custom. Without custom calendars and time zones, neither of those use cases makes sense anymore, so there’s no reason to keep these methods. There are six types that have `getISOFields` methods, so that’s a savings of six methods. You can still get the ISO fields by changing your calendar with the withCalendar method and reading the fields from the result.
+
+PFC: We would like to collapse all of Temporal’s valueOf methods into a single, identical function object. All of the valueOf methods on Temporal types do exactly one thing: they throw a TypeError. They do not do a brand check, because if you failed the brand check you would throw a TypeError anyway, so literally the only thing that the spec text for these methods says is “throw a TypeError exception”. There are eight of these methods. They can all be the same function object, and that function object would even be reusable by future proposals that want a throwing valueOf, such as Decimal, for one. The only observable change is what you can see in this code sample: the valueOf method of one Temporal type is identity-equal to the valueOf method of another Temporal type. Another case where we figured we could do the same collapsing is the toJSON method. Basically, all that the toJSON method of a Temporal type does is call toString without any arguments, but without observably looking it up. So what we propose is changing these, again, to an identical function object that is identity-equal across types, which does a brand check, switches on the brand, and makes the appropriate toString call without observably looking toString up. Another option we considered and discarded was to look up and call toString with no arguments. That would have made the body of this function simpler, but would also introduce an observable lookup.
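
The observable effect can be sketched with stand-in classes (the class and function names here are ours, purely for illustration; these are not the real Temporal types):

```javascript
// One shared function object, analogous to the proposed collapsed valueOf.
function sharedValueOf() {
  throw new TypeError('Temporal objects cannot be converted to primitives');
}

// Stand-ins for two Temporal types (hypothetical names).
class FakePlainDate {}
class FakeInstant {}
FakePlainDate.prototype.valueOf = sharedValueOf;
FakeInstant.prototype.valueOf = sharedValueOf;

// The only observable change: identity equality across types...
console.log(FakePlainDate.prototype.valueOf === FakeInstant.prototype.valueOf); // true

// ...while the behavior stays the same: implicit conversion throws.
try {
  new FakePlainDate() < new FakeInstant(); // comparison triggers valueOf
} catch (e) {
  console.log(e instanceof TypeError); // true
}
```

Nothing else about the types changes; code that never compares the method objects themselves cannot tell the difference.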
+
+PFC: I will pause here and relay some concerns from ABL about implementing these method collapses in Firefox. It sounds like it might be inconvenient, though not impossible, because you need to do more manual work to connect everything up. I don’t know the exact nature, but it sounds like the %ThrowTypeError% intrinsic has had some implementation problems because it is hard to get right. He also brought up a couple of questions: (1) should we recommend that future proposals use these shared functions as well? I mentioned that it would be possible, but do we want it to be recommended? (2) Suppose we wanted to uncollapse these in the future so they would be distinct functions; it would be good to get consensus as part of this on whether that would be acceptable, so that we know exactly what we are signing up for.
+
+PFC: All right, on to the simpler removals. We are proposing to remove the subtract methods. We have add methods and subtract methods: `add` adds a duration to another type, and `subtract` subtracts a duration from another type. You can achieve the same result by adding the negation of the duration you were going to subtract, just as A - B is the same as A + -B. This is obviously less ergonomic; if you want to subtract, you will look first for a method called subtract. But for all the reasons given earlier, we suggest removing it for now and investigating whether there is enough demand for the ergonomics to bring it back in a future proposal.
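
The identity PFC describes can be illustrated with a toy duration type (all names here are made up for illustration; this is not how Temporal is implemented):

```javascript
// Toy types showing that subtract(d) is exactly add(d.negated()).
class Minutes {
  constructor(n) { this.n = n; }
  negated() { return new Minutes(-this.n); }
}

class ClockTime {
  constructor(totalMinutes) { this.totalMinutes = totalMinutes; }
  add(d) { return new ClockTime(this.totalMinutes + d.n); }
  // The convenience being removed is just this one-liner:
  subtract(d) { return this.add(d.negated()); }
}

const t = new ClockTime(90);
console.log(t.subtract(new Minutes(30)).totalMinutes);          // 60
console.log(t.add(new Minutes(30).negated()).totalMinutes);     // 60
console.log(t.add(new Minutes(-30)).totalMinutes);              // 60, negative duration
```

The last line is the point SFC raises later in the discussion: with negative durations allowed, `add` and `subtract` give two spellings of the same operation.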
+
+PFC: We propose the same thing for the `since` methods. Our types have `until` methods and `since` methods, with a similar relationship where one gives the negation of the other. `until` takes the difference between two objects of the same type, written `a.until(b)`, and `since` has the opposite sense. As you can see in the code sample here, `a.since(b)` can be replaced with `a.until(b).negated()`. You could also consider replacing it with `b.until(a)`, and that will work for time units (which are the default) but not for calendar units. We propose removing `since` for the same reasons as `subtract`: it is a method that is a version of another method but goes in the opposite direction.
+
+PFC: There are two methods named `withPlainDate`, on PlainDateTime and ZonedDateTime, and we propose removing them. `withPlainDate` and `withPlainTime` are conveniences for creating a new object of the same type with all of the calendar units, or all of the time units, replaced at once. We expect withPlainTime to be more commonly used than withPlainDate; withPlainDate exists basically for symmetry. Also, if we were to remove withPlainTime, it would be less convenient to work around with a property bag, whereas working around withPlainDate is relatively easy, as you see here in the code sample, because you only have to replace year, month, and day.
+
+PFC: Another thing we are proposing to remove is two methods from PlainTime, `toPlainDateTime` and `toZonedDateTime`, which allow you to combine a plain time with a plain date, and optionally a time zone if you want a ZonedDateTime. There are two ways to do this: you can use the PlainTime method and supply a date, or you can use the PlainDate method and supply a time. So those are again two methods that do the same thing in opposite senses, for convenience, and you don’t necessarily need that convenience: instead of adding the date to the time, add the time to the date. Really the only drawback of removing these is that when you type `.to` in your IDE’s autocomplete popup on a PlainTime, you will not see `.toPlainDateTime` or `.toZonedDateTime`, and that is a relatively small cost for removing two methods that duplicate existing functionality.
+
+PFC: We have exact-time types, Instant and ZonedDateTime, which we define in terms of time elapsed since the Unix epoch. In order to examine that, we have four properties: epochSeconds, epochMilliseconds, epochMicroseconds, and epochNanoseconds. The milliseconds APIs are very important, because that is how you interoperate with the legacy JS Date and with many APIs around the web and in other libraries. The nanoseconds APIs are important because nanoseconds are the resolution of the type, the granularity in which we count the elapsed time since the epoch. Seconds and microseconds are not used as much, so we propose to remove those, because you can calculate them yourself. It is easy to round milliseconds to seconds: you divide by 1000 and take Math.floor. Nanoseconds to microseconds is a bit more difficult, because the values are BigInts, which have no floor division, only truncating division, so you need to be careful when you are dealing with exact times before the 1970 epoch, when the number of microseconds and nanoseconds is negative, as you can see in the code sample here. We think this is going to be a slightly annoying code snippet that people will have to copy around, but there are not that many APIs that use microseconds, so it seems like another candidate for adding back in a future proposal if there is demand for it.
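
A sketch of the replacement snippets PFC describes (the helper names are ours, not part of Temporal): milliseconds-to-seconds is a one-line floor division, while nanoseconds-to-microseconds has to emulate floor semantics on top of BigInt’s truncating division so that pre-1970 instants come out right:

```javascript
// epochSeconds from epochMilliseconds: Math.floor handles negatives.
function epochSeconds(epochMilliseconds) {
  return Math.floor(epochMilliseconds / 1000);
}

// epochMicroseconds from epochNanoseconds: BigInt `/` truncates toward
// zero, so adjust downward when there is a negative remainder.
function epochMicroseconds(epochNanoseconds) {
  let us = epochNanoseconds / 1000n;
  if (epochNanoseconds % 1000n < 0n) {
    us -= 1n; // emulate floor division for pre-epoch instants
  }
  return us;
}

console.log(epochSeconds(1500));        // 1
console.log(epochSeconds(-1500));       // -2 (not -1)
console.log(epochMicroseconds(1500n));  // 1n
console.log(epochMicroseconds(-1500n)); // -2n (truncation alone would give -1n)
```

The negative cases are exactly the “be careful before 1970” trap: naive BigInt division rounds toward zero, which is one microsecond off for any pre-epoch instant with a fractional remainder.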
+
+PFC: Another thing that we are proposing to remove is the methods that let you convert directly from PlainDateTime and ZonedDateTime to PlainYearMonth and PlainMonthDay. You can still accomplish this by converting via an intermediate PlainDate. We think it is probably pretty uncommon to want to go directly from a type with a full date and time to a type with only a year and month, or month and day. We figure these will not be missed very much, and if you do miss them, there is an easy alternative to reach for.
+
+PFC: There are four places where we had pairs of methods: one suffixed with ISO that returns an object with the ISO 8601 calendar, and one with a required calendar parameter. We propose removing the latter. Specifically, remove Instant.prototype.toZonedDateTime, Now.zonedDateTime, Now.plainDateTime, and Now.plainDate, while keeping Instant.prototype.toZonedDateTimeISO, Now.zonedDateTimeISO, Now.plainDateTimeISO, and Now.plainDateISO. We had these for a good reason, which was to make sure that you didn’t get an ISO calendar date when you wanted a human calendar, but they have caused confusion. Like most other things we are removing, they have an easy replacement, as you can see in the code sample: instead of supplying the calendar directly, you call the ISO method and then withCalendar, which is hardly any longer.
+
+PFC: All right then, finally, we are proposing to remove the relativeTo option from Duration.prototype.add. This does not remove any functions, but it removes a lot of complexity that we think is not going to be used often, and this particular functionality has had normative bugs in the past. So rather than keep spending resources wondering whether we have bugs in something that is unlikely to be used a whole lot, we will just remove this option. There is an easy workaround with add and until: instead of adding two durations and passing the relativeTo option, you add the first duration to the relativeTo, add the second duration to the result, and take the difference from the original relativeTo.
+
+JGT: So that was a lot, and I think we are probably going to have a lively discussion here. We did set up some time boxes in the hope that we get through everything, but if you add up all the numbers you realize it is a lot more than our 48 minutes. We did it like a plane flight: it is overbooked, on the assumption that some topics will come in a little under. So can we get help from somebody who is not a Temporal champion to be our stopwatch and help us enforce the time boxes?
+
+CDA: That is our job.
+
+JGT: Okay great. Thank you so much.
+
+CDA: I presume we are starting on A right now?
+
+JGT: In a moment, but before we dig into these: we have had six months to go through the stages of grief about taking all of these things out, and we are asking you to do it right now, which is sort of a tall order. One thing that helped all of us in understanding this is that it is not permanent. We are simply trying to figure out how to stagger this functionality so that we can ship Temporal at all; that is the most important thing here. Something that has been driven home to us by Google is that it is just really hard to ship things in a big company, and incredibly hard to ship big things in a browser. We are asking for your help to get the first version of Temporal out of the door so we can make things better later. Taken all together, these removals represent what SYG and others have told us is needed so that we can ship. With that, let’s go to the queue.
+
+PFC: Looks like we have nobody signed up for C, H, I, J, and K. So we will talk about A, B, D, E, F on the left side and G and L on the right side, sounds good?
+
+CDA: I think we have some general non-letter specific commentary as well. So let’s just jump right into it. Rob are you there?
+
+Prepared statement from SYG:
+
+> V8 strongly supports this scope reduction and thanks the champions for being so open to late-stage simplification. V8 will take all the reduction it can get: the smaller the proposal, the higher the likelihood it can be implemented and shipped. That said, we don't feel super strongly about any particular method's reduction. For example, we see where developers are coming from for wanting `subtract()`: it is a ubiquitous convenience, even though subtraction everywhere else also means addition of the negation.
+
+NRO: Thanks for doing this work. Looking at the different removals, I would guess that some of them save implementations a lot, like removing calendars, and that others don’t matter much, but I have absolutely no idea how much is actually saved by removing each method. So it would be great if the implementations could help us understand this.
+
+DLM: It is hard to say offhand. I have an item in the queue later on, but I’m skeptical about squashing valueOf and toJSON together. All I can say is that, given the state of the implementation in SpiderMonkey, it is easy to experiment with anything that is a pure removal, while anything that changes functionality will require more work, and one could experiment with the various reductions independently of each other. With that being said, these reduction requests are not coming from us, so it is valuable to get that feedback from SYG.
+
+JGT: That is one of the things we have learned from this process: a lot of this is engine-specific. Functions are expensive in V8, and that is not the case in SpiderMonkey, so I would want to defer to SYG for a more formal response. But it is on the order of a few KB per function in code size, and there is some RAM cost as well. That may not seem like much, but multiply it by 96 functions. SYG has mentioned a budget of something like 200KB per year; this would be several years’ worth of their current budget expended on just Temporal, which would obviously make them unhappy.
+
+SFC: I think SYG’s statement says everything we need on this topic, but as far as I understand it is both per-context memory use and binary size that are important here, and function removal is the biggest way to improve those metrics.
+
+PFC: Let’s move on to the specific topics. We have about 45 minutes, and the time boxes are a bit optimistic, so let’s take half of the time box for each one.
+
+JHD: Yeah, this is about the calendar stuff. One of the points you made, and I don’t know if you can put the slides back up while we are talking about this, was that this now requires monkeypatching, which a lot of JavaScript developers outside of TC39 have stopped doing, and which has disrupted a lot of our own proposals. I think that is a huge downside. Separately, I think the idea of having the Calendar class and the TimeZone class is that they serve as a non-string identity for a thing: a place for a time zone or a calendar where future data, properties, methods, and accessors can be added as needed. It also helps avoid making a program stringly typed; in other words, using strings all over the place and basing all of your branching on them. It feels really important to me to have an authoritative object identity, or a shared prototype, ideally with internal slots, for a calendar and a time zone. These are important concepts that merit a primitive. I realize these two classes constitute a lot of methods, and that you are trying to count numbers on the scoreboard of how many functions you can take off to address the feedback; I get that. And I am not unsympathetic, specifically for calendar, to the possibility of a better design if it is deferred now; I think that is the strongest argument in favor of removal, setting aside the implementation feedback. But for maintainability of a code base, for type systems, for googling, for learnability, an API that is not stringly typed feels much more usable.
+
+DE: The thing is, we already kind of live in the stringly typed world even without these removals, both through Intl, which supports passing time zones as strings, and through Temporal, because it accepts strings for calendar and time zone and is optimized for that case. If you pass in a TimeZone or a Calendar object, it goes through rather complex and suboptimal paths in the spec. So I would want to hear about benefits besides the conceptual or typing ones, given that we are already also incurring the cost of the string-based paths. The reasons for that design are both to minimize the overhead of always allocating an extra object with an extra identity for each instance, which adds up to a lot, and to make sure that code works reliably, because you are reliably calling the original methods, matching developer expectations. Any thoughts on that?
+
+JHD: I don’t think anyone would be surprised to learn that I prefer to avoid observable lookups, and I value the simplicity of the string over the object in terms of the actual spec steps performed. I am not arguing in favor of the full protocol; passing strings is fine when the strings represent a canonical object and there is a transformation between them, because then I can take the string, say “this is a calendar string”, make a real calendar out of it, and do stuff with it. The string is the serialized format; serialized not for the wire, but for passing to functions and so on.
+
+DE: So what should we do, given that neither the current state of the proposal with calendars included, nor the proposed state with calendars excluded, meets your goals there?
+
+JGT: Could I maybe jump in there? JHD, you mentioned that a much better design is possible for both of these features in a future proposal, so can we defer this discussion until that future proposal? We are pretty certain there is going to be community demand, for time zones in particular and for calendars. But for the sake of time today, could we have this discussion later?
+
+JHD: I would say yes, if enough of us, myself possibly included, are convinced that it will be possible to add a better-designed thing in the future. But if we ship the strings now and never get around to adding a better-designed thing with an ergonomic adoption path from the initial version of the proposal, then I think that would be a very bad outcome. I am not currently convinced, and I am being careful to say that it is not specifically me who needs convincing; what is important is that the committee in general is convinced.
+
+CDA: If we can, I would like to get to Shane’s comments, and then move on to the next topic.
+
+SFC: Just in case this is not clear to other people: non-Gregorian calendars remain supported with this change, and that is the most important thing from my perspective, coming from Intl. I looked at some use cases for custom calendars in preparation for this slide, and basically all the cases I found were better served by adding the calendar to CLDR: then you get the calculations, you get formatting, and you get support in other ecosystems. So I look forward to possible future proposals about this, but I think for Temporal it is not a necessity.
+
+JGT: One other thing: PFC has done some work preparing for this, and composition actually seems to be effective, certainly compared with monkeypatching, which will now be so hard that honestly nobody is likely to do it. Composition seems a much more reasonable approach. That is something we have learned as part of this, and we feel pretty optimistic that if you want custom calendar or time zone behavior, you can compose pieces of Temporal to make that work.
+
+DE: I am not sure we should restrict this to ten minutes; this is an important area. The topic that JHD raises is important: can we add the Calendar class later? When these classes were first added, we were always calling the methods on those objects, and we thought that if we added them later, that would be inconsistent, so we had to add them from the start. Philip, are you queuing up to correct me?
+
+PFC: Yeah, maybe JHD could clarify, but I don’t think his position is so much about custom calendar APIs as about having a reified calendar and time zone object, even if it can’t be customized.
+
+JHD: That is generally correct. I would rather ship a reified placeholder now, and then custom objects can be added in the future. I am concerned about delaying the reified form.
+
+DE: Just to finish my point: when we later made the change to call the original intrinsics when you are using built-in calendars, that changed the calculus of whether we could add custom calendars later. We proved to ourselves that we could, and that is what made this a strict removal rather than a change in semantics. I also think custom calendar APIs should be usable from Intl as well; the level this was at was conceptually the wrong level.
+
+PFC: And I can go so far as to say that, though I’d need to verify this at a later point, I think if we now had a reified calendar object that does not call custom methods, that would make it more difficult to add custom calendars in the future. So my recommendation would still be to support only the string if we are not going to support custom calendars.
+
+DE: The current spec uses whether you have an instance of the class as its cue for whether it should call the methods or not. If we kept the object but did not call the methods, where would we go from there? That is unclear.
+
+JGT: That is a reasonable point. The entire purpose of the Calendar class is to support custom calendars; in seven years of working on this proposal, it has had no other functionality than helping people build custom calendars, and that is why these removals are tied together. For time zones there is one thing you cannot otherwise do, which is to find the next or previous time zone transition. We have been working on that, and we found that it is more ergonomic to put it on ZonedDateTime instead of TimeZone. I am pretty confident that the functionality that exists on calendars and time zones survives without the ability to create custom ones. We have not found the use cases there, and if we do find them, we strongly agree with JHD that we need a home for them, but as of today we have not found them. Can we move on?
+
+CDA: We have about 25 minutes to get through a long queue. JHD?
+
+JHD: I said my piece on timezone or calendar and I think the feedback would be the same so we can move on.
+
+SBE: I come to custom time zones as a prospective consumer. They are a pretty critical use case for supporting iCalendar, which is the predominant standard for the exchange of calendar data. iCalendar has no provision for referring to a common time zone database; the time zone is encoded within the format itself. So custom time zones are pretty critical there. I do think a declarative, database-like format would be an improvement, but my concern is this: the statement was made that removal, not redesign, was the foremost goal here, yet a removal of this scope is sort of a de facto redesign. As a potential consumer of this API, it is very important to me that we be clear that it is possible not only to add custom time zone support in a future proposal in the desired shape, but also, in the meantime, to backfill that functionality onto the current shape of the API without losing functionality or having to layer additional semantics onto the API’s types.
+
+JGT: That is really a big challenge. Particularly in the case of time zones, I am now convinced that the current time zone design is a bad one, and that we can do better, especially for the iCalendar case, because there are existing standards: jsCalendar takes the same schema and moves it into idiomatic JSON. We would love to start a proposal now to build custom time zones for Temporal based on the exact schema that you are depending on. So if you are interested, I think we would love to work with you on that proposal, literally starting tomorrow. With that being said, I think it would be a mistake to take a design that we don’t think is the best one and keep it in a Temporal that is already too big; that would be short-sighted, and this is an API that we have to live with for decades.
+
+SBE: I want to clarify that I am not suggesting we should ship the existing design. I do think that taking the time to do a more appropriate declarative version would be a net positive; my only concern is making sure that the remaining shape of the API is conducive to such a proposal.
+
+PFC: I want to say thank you to Sean here. We talked a couple of days ago, and he explained the iCalendar use case in more detail. If you are interested, I have a PR for the Temporal cookbook up on the proposal-temporal repo, with a class that implements iCalendar time zones based on some test data that Sean gave me. The class composes with Temporal.ZonedDateTime, and you can see how that looks with the remaining API in Temporal.
+
+SFC: I want to say data driven APIs are the future. I think they are better and I would like to see more of these.
+
+JHD: Just to clarify, a question: the shared valueOf method is going to have a distinct identity from the existing %ThrowTypeError% intrinsic?
+
+PFC: It is currently distinct but if you have a good reason for why it should not be distinct, I am fine with that.
+
+JHD: I don’t know if every implementation is capable of providing a single function that throws an error with a different message depending on what the receiver is. So it is fine for it to be a distinct intrinsic; I was just curious about the current state. Thank you.
+
+PFC: I experimented with collapsing methods in V8 earlier this year, and that is possible. I am pretty sure SpiderMonkey is capable of that as well. I’m not certain about JavaScriptCore and the other engines.
+
+SFC: So the shared prototype proposal is basically HasStringLikeSerialization. It is a prototype that could be used by other objects. That's one way to conceptualize it.
+
+DE: I just wanted to clarify, because Mark previously had an item on the queue about replacing toJSON with toString, and the reason why not was not obvious to me at first. It’s that toJSON is called with a particular argument, and toString may interpret its argument as something, so we need to make sure that toJSON does not misinterpret the argument it receives. That is why it has to be a distinct function.
+
+DLM: I am a bit skeptical about this one, because unlike the other items it is not a pure removal; it requires a change to the implementation. I have not had a chance to review the comments that ABL raised, but I do trust him, and if this could be a source of problems and errors in the future, it seems like something good to avoid. So my request is to defer this one until we have tried all the others. If this is the last thing we need to do to make Temporal ship, then sure, but otherwise I would like to see us hold off.
+
+PFC: That sounds fair enough to me.
+
+DE: Subtract is kind of nice ergonomically. Okay Shane you can rebut that comment.
+
+SFC: For subtract: besides the size-reduction and usage concerns, which motivate the removal by themselves, there is an additional thing. I have had a lot of experience with developers being confused by negative durations; they think durations are always positive. If we have both methods, you can subtract a negative duration to go into the future and add a negative duration to go into the past: two ways to do the same thing. I think this is a footgun for developers, and I support this removal not only because it improves the metrics for V8 but because it is clearly the right thing to do.
+
+JGT: Can I suggest, just reading the temperature of the room: we have heard concerns about subtract on the repo, and we are hearing concerns about subtract here, so in the interest of time, should we just withdraw the removal of subtract and move on? (no) Okay.
+
+NRO: So we can still do negative durations?
+
+JGT: Yes.
+
+ACE: On my graph of value added versus how often I will use it, subtract is in the quadrant of things I will use way more than the others, and removing it saves much less than the other removals do. I see the point you make, Shane, about adding negative durations, but like NRO said, I don’t think removal reshuffles that: we still have to think about negative durations either way, and people can learn about adding negative numbers. So I would like to keep it.
+
+CHU: The date libraries we use today offer subtract methods, and I would rather keep them instead of using negative durations. And I have to wonder, does removing them really reduce complexity? Internally, aren’t they just adding the negated duration? So I don’t see much benefit in removing them.
+
+LCA: On the complexity point: it is not implementation complexity, but the fixed overhead for every function that is added, and this adds several more functions. With the other comments I completely agree.
+
+SFC: To clarify on that: these functions just wrap equivalent code internally, so they are the definition of bloat; they bring no additional capability. As for ergonomics, I would argue the effect is actually negative for ergonomics; people can disagree, and that is fine.
+
+MF: Moving on from subtract to since and until. I think it is reasonable to expect that somebody can understand that subtract is just adding the negation, know that identity, figure it out, and use it in the absence of subtract. I don’t think the same holds for since and until. This identity is hard to discover, and apparently it is not an identity, or at least not as simple a one. I don’t want people to have to be experts to know how to avoid—it seems—
+
+JGT: You mean removing since.
+
+MF: Yeah and to get the functionality back by using the until method.
+
+JGT: In the comments and in the champions group, we heard interesting feedback that several developers were confused by “since”, even though from an English standpoint it seems clear. The developer I am thinking of said “I never really understood what since meant”; it was harder for them to grasp. One of the potential benefits of removing since is that there is one way to do things rather than two.
+
+NRO: Same as what Michael said: I was confused too, though only briefly, and there were some other people confused on Matrix. So can you explain why since and until are not reversible?
+
+JGT: It has to do with months. When you are moving from one month to the next: say you start on the 31st of January and you measure a distance into February, but February has only 28 days, so there is a clamping that happens at the end of the month. That is necessarily not reversible.
+
+PFC: Yeah, that is exactly it. They are equivalent if you are not dealing with calendar units. If you are dealing with calendar units, you can get clamping.
+
+JGT: The default options do not give you clamping. You have to opt in to calendar units to run into the problem; you need to work at it.
+
+PFC: Another way to look at it: the until and since methods use the receiver as the reference point for the calculation. So, if you switch the order from argument to receiver and receiver to argument, you are using a different reference point, whereas if you switch the method and then take the negation, you are using the same reference point. I don’t know if that makes it clear?
+
+NRO: If I want to say this event happened X time ago, am I looking at using since or until?
+
+JGT: It depends on how you are thinking about it. If you use until, you would say “that event until now”; if you use since, you would say “now since that event”. Does that make sense? This discussion is an example of the exact confusion we see in the repo, because there are two ways to do things that are similar enough. My current thinking is that it will be easier to teach developers that there is one way to do this, starting at the receiver and going forward, than to continually explain the difference between two similar ways of doing something that are opposites.
+
+SFC: Yeah, I am also in the queue and I agree with everything that was said. One way to do things is better for the definition of the API, and since is not equivalent to just reversing the arguments, so one way of taking a difference is enough.
+
+DE: Yeah, I think this discussion makes clear that keeping both forms also does not help non-experts get this right. And the main defense that we have is—I forget what it is called—that the balancing behavior by default is the same, so the result will not be ambiguous.
+
+RGN: For anyone else who is interested there is a concrete example in Matrix #tc39-delegates:
+
+```javascript
+Temporal.PlainDate.from("2024-06-30").until("2024-08-31", { largestUnit: "months" });
+// => P2M1D
+Temporal.PlainDate.from("2024-08-31").until("2024-06-30", { largestUnit: "months" });
+// => -P2M
+```
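+
+The clamping can be reproduced without Temporal at all. Below is a minimal, self-contained sketch (hypothetical helpers, non-leap years only, not Temporal's actual algorithm) of a receiver-anchored month difference; it shows why `a.until(b)` is not simply the negation of `b.until(a)` once month units are involved:
+
+```javascript
+// Model dates as plain {year, month, day} records (non-leap years only).
+const DAYS = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31];
+
+function addMonths({ year, month, day }, n) {
+  const m0 = month - 1 + n;
+  const y = year + Math.floor(m0 / 12);
+  const m = ((m0 % 12) + 12) % 12 + 1;
+  // The crucial step: clamp to the end of shorter months (Jan 31 + 1mo -> Feb 28)
+  return { year: y, month: m, day: Math.min(day, DAYS[m - 1]) };
+}
+
+function toOrdinal({ year, month, day }) {
+  let d = year * 365 + day;
+  for (let i = 0; i < month - 1; i++) d += DAYS[i];
+  return d;
+}
+
+// Count whole months starting from the receiver `a`, then leftover days.
+function until(a, b) {
+  const sign = toOrdinal(b) >= toOrdinal(a) ? 1 : -1;
+  let months = 0;
+  while (sign * toOrdinal(addMonths(a, months + sign)) <= sign * toOrdinal(b)) {
+    months += sign;
+  }
+  return { months, days: toOrdinal(b) - toOrdinal(addMonths(a, months)) };
+}
+
+const jun30 = { year: 2024, month: 6, day: 30 };
+const aug31 = { year: 2024, month: 8, day: 31 };
+console.log(until(jun30, aug31)); // { months: 2, days: 1 }   i.e. P2M1D
+console.log(until(aug31, jun30)); // { months: -2, days: 0 }  i.e. -P2M
+```
+
+Because the backward direction clamps August 31 to June 30 after subtracting two months, the leftover day in the forward direction simply disappears: the two results are not negations of each other.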
+
+LCA: This was the thing we got the most feedback on out of all of these. I looked through internal code that does not use Temporal but uses some Rust date libraries, and we have not a single use of until and many, many uses of since—tons of usage of since and not a single use of until. So I don’t know; it can’t be that confusing.
+
+SFC: I definitely strongly support this, but I don’t agree with the code that is on the slide, because that code is not calendar-safe. I can follow up later about the code.
+
+JGT: Good point.
+
+SFC: But I agree with the change. Just not with the stated code.
+
+DE: The question for LCA, does the `since` method you are using use this particular calendar month balancing operation?
+
+LCA: I do not know.
+
+DE: Okay, so it sounds like it doesn’t use since/until in a meaningful way because that is the difference that we are talking about. Maybe it is about argument order but that is the superficial part.
+
+LCA: I don’t understand that argument either, because if there is a difference between these two, that assumes a reason for the difference to exist—a reason that existed at some point.
+
+DE: You can get the other behavior by switching the arguments and doing the negate. But the only time that you care about that is when it is in this particular month calendar case.
+
+JHD: Yeah, it is just a quick comment on letter L, and it seems fine to remove it. But the code example that you have shown is not complex. I think the reason was explained on Matrix: there is other complexity that makes keeping it complex, and the transformation that you showed on the slide works because there are no other options, is that right?
+
+PFC: The transformation is here on this slide. It is kind of a complicated answer and I will summarize it as quickly as I can. It looks like a simple replacement but what actually happened in the `Duration.add` method internally is that there were three different code paths that it could go through. One path for if you don’t pass relativeTo, one path if you pass a PlainDateTime relativeTo, and one path if you pass a ZonedDateTime relativeTo. So removing the relativeTo option does actually remove those extra code paths in favour of the until methods of those respective types. So this actually removes a place where control can go through three different paths and there is a certain amount of duplication.
+
+JGT: The complexity is internal—it was very complex—and we would be happy to see it in the rear-view mirror.
+
+JHD: So my next item should be last to make sure nobody else wants to talk.
+
+JGT: The queue is empty.
+
+JHD: Okay, so I wanted to talk about overall process concerns with Temporal. It has been at Stage 3 for 4 years—certainly Stage 2.7 is brand new, which is why it has been Stage 3 and not 2.7. The rate of change was slowing down; it even had its first plenary in 4 years without normative changes this year, and it was looking good, and now there is a lot of change. We should still consider these implementation concerns, because implementers are still requesting the changes. But your slide has two things—code size and complexity, something like that—and neither of those was hard to predict in the years prior to Stage 3; we did not need an implementation to know it was going to be a lot of stuff. Certainly you needed an implementation to know whether it was within specific limits, but I would hope that in the future these kinds of concerns are brought up much earlier in the process, when things are still being designed. And as you explained, the rationale of the person concerned with code size may be different from that of the person in TC39, and I hope those things are coordinated in the future. The other thing is: we are all very interested in having Temporal shipping in browsers as fast as possible—I have wanted that for many years. With that said, we have tried in recent years to make the stage a proposal is in actually match the reality of where it is, and it seems worth considering that Temporal belongs at 2.7 until these sorts of changes are finished, so that the move back to 3 can actually signal to the world that it is ready. As it stands, every week someone calls in asking what is still lacking, and I think it is simpler for everyone to be able to say: once it is Stage 3 again, it is ready.
+
+JGT: We are up against the time limit, and that would be a relatively long discussion item. So I am willing to have it, but I would love to see if we can first get consensus on which of A–L have consensus today, so we can actually start executing, and then, if there is time, talk about stage. So I would like to go through them—I don’t know the right way to do this process-wise, whether by a show of hands or by asking for objections—and we will go through A–L. Do we need explicit support for each one, or should we just ask for objections?
+
+CDA: It would be great to get acclamation for the entire batch of everything. If we cannot do that, then –
+
+PFC: We are going to defer D until we get a chance to consider feedback.
+
+DE: Are we going to defer E as well?
+
+JGT: I think it would be more effective to go letter by letter; I don’t think we are going to get acclamation for everything. Shall we start with A? Any objections to removing Temporal.Calendar and user calendars?
+
+JGT: B? Temporal.TimeZone and user time zones? Okay.
+
+JGT: For C, I assume there are no objections because C is only there for the purpose of A and B; any objections?
+
+JGT: D we have just agreed to defer until Mozilla has a chance to investigate.
+
+JGT: Do we have objections to removal of subtract? I see a few hands here.
+
+SFC: Can we write down who those are for the purpose of notes? These are all recommendations from the champions group. State the names so they will appear on the notes.
+
+DE, LCA, CHU, and ACE object to removing `subtract`.
+
+JGT: Removing since? Any objections?
+
+NRO: Can you remove since, given that it is a single method and that could happen synchronously—never mind.
+
+??: No reduction –
+
+DE: It is not about a specific thing. I don’t know if there’s any further question to ask.
+
+CDA: RBN prefers to leave since.
+
+RBN: What was described to me is the asymmetrical nature of `a.until(b)` versus `b.until(a)`—it is asymmetrical because of the start and end points. `since` has value because people are likely to reach for the wrong thing, and `since` gives them an easier way to do the right thing.
+
+CDA: Okay. Objection to removing since.
+
+JGT: One thing we should consider in the future is removing until and leaving since, if that would address some of those concerns.
+
+CDA: We may be able to revisit since() later so let’s continue.
+
+JGT: Any objections for withPlainDate? For all of the remaining ones, these are less contentious, are there any objections to G, H, I, J, K or L?
+
+JGT: Okay I think we did it.
+
+PFC: All right, I guess we are a couple of minutes over time, and it remains to ask the chairs if it is possible to schedule an overflow item for JHD’s topic.
+
+CDA: We will do our best, but I cannot promise.
+
+JGT: Of A–L: A, B, C and G through L have all reached consensus; D is deferred until Mozilla investigates; and E and F did not get consensus. Thank you all for slogging through this and for the discussion—we are grateful for your time, your brainpower, and your feedback. It has been a long road, and hopefully together we can ship this thing.
+
+PFC: Thanks everyone. Let’s give it back to the chairs.
+
+CDA: Thank you and we are past time and I understand that you are going to be ejected by security from the University very soon, so we will see everyone tomorrow.
+
+### Summary / Conclusion
+
+- Consensus reached on fixing an arithmetic bug, https://github.com/tc39/proposal-temporal/pull/2838.
+- Consensus reached on removing Unicode minus signs from the ISO string grammar, in https://github.com/tc39/proposal-temporal/pull/2856 and the corresponding ECMA-262 change https://github.com/tc39/ecma262/pull/3334.
+- Consensus on removing Temporal.Calendar, user calendars, Temporal.TimeZone, user time zones, getISOFields() methods, withPlainDate() methods, PlainTime’s toPlainDateTime and toZonedDateTime methods, epochSeconds properties, epochMicroseconds properties, Instant.fromEpochSeconds, Instant.fromEpochMicroseconds, PlainDateTime and ZonedDateTime’s toPlainYearMonth and toPlainMonthDay methods, Instant’s toZonedDateTime method, Now.zonedDateTime, Now.plainDateTime, Now.plainDate, and the relativeTo option from Duration.add.
+- Before collapsing valueOf and toJSON into identical function objects, Mozilla would like to investigate how feasible it is on their end. We will bring this item back to the next plenary.
+- Removal of subtract() and since() methods did not reach consensus.
+- JHD considers it important to have reified Calendar and TimeZone objects so that we don’t have “stringly-typed” APIs and functionality can be added to them in the future. This can be considered for a future proposal, especially if new, non-reentrant designs can be taken into account.
+- SBE considers user time zones an important use case and encourages work on a proposal to re-add them using the iCalendar data model.
+- While there is no consensus to change the stage of Temporal, JHD considers that complexity and code size could have been evaluated much earlier.
diff --git a/meetings/2024-06/june-13.md b/meetings/2024-06/june-13.md
new file mode 100644
index 00000000..8d5fa42a
--- /dev/null
+++ b/meetings/2024-06/june-13.md
@@ -0,0 +1,1342 @@
+# 13th June 2024 | 102nd TC39 Meeting
+
+**Attendees:**
+
+| Name | Abbreviation | Organization |
+|-------------------|--------------|---------------------------------------------|
+| Waldemar Horwat | WH | Invited Expert |
+| Chris de Almeida | CDA | International Business Machines Corporation |
+| Jirka Marsik | JMK | Oracle |
+| Richard Gibson | RGN | Agoric |
+| Jonathan Kuperman | JKP | Bloomberg |
+| Daniel Minor | DLM | Mozilla |
+| Nicolò Ribaudo | NRO | Igalia |
+| Chengzhong Wu | CZW | Bloomberg |
+| Keith Miller | KM | Apple |
+| Michael Saboff | MLS | Apple |
+| Duncan MacGregor | DMM | ServiceNow |
+| Ben Allen | BAN | Igalia |
+| Ron Buckton | RBN | Microsoft |
+| Christian Ulbrich | CHU | Zalari |
+| Jesse Alama | JMN | Igalia |
+| Aki Rose Braun | AKI | Ecma International |
+| Samina Husain | SHN | Ecma International |
+| Sergey Rubanov | SRV | Invited Expert |
+| Istvan Sebestyen | IS | Ecma International |
+| Mikhail Barash | MBH | Uni. Bergen |
+| Romulo Cintra | RCA | Igalia |
+
+## Decimal for Stage 2
+
+Presenter: Jesse Alama (JMN)
+
+- [proposal](https://github.com/tc39/proposal-decimal)
+- [slides](https://notes.igalia.com/p/june-2024-tc39-decimal)
+
+JMN: Good morning, everyone. This is Jesse with Igalia. I am working on decimal in partnership with Bloomberg and would like to give you an overview of the status of the proposal. I’ll focus on some of the changes since last time. My intention is that this presentation is a kind of “diff” rather than a full presentation of decimal, since, as we all know, decimal has been presented here a number of times in the last year or so. So I assume that a large majority of people in this room are familiar with it.
+
+JMN: But just to give you a very brief high-level overview of what we are talking about here: the decimal proposal adds exact decimal numbers to JS, trying to eliminate, or at least significantly reduce, the rounding errors frequently seen with JS’s binary floating-point numbers, especially when handling human numeric data. Money is the main example of the kind of data we are talking about. To phrase this as a kind of punchline: “For the love of (fill in the blank), can we pretty please make 0.1 + 0.2 equal 0.3?” We are supposed to get things right there.
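+
+As a concrete reminder, the binary64 behavior in question is reproducible in any JS engine today:
+
+```javascript
+// Binary floating point cannot represent 0.1, 0.2, or 0.3 exactly,
+// so the sum picks up a small error:
+console.log(0.1 + 0.2);         // 0.30000000000000004
+console.log(0.1 + 0.2 === 0.3); // false
+```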
+
+JMN: I also want to tell you about the data model—again, not in too much detail, because I have presented this before. As a reminder, we have explored various options for modeling decimals. It’s an interesting space because there’s not a unique solution, but rather different ways to validly think about these numbers and define operations. We considered rational numbers and arbitrary-precision numbers (under the moniker “BigDecimal”). We considered fixed-point decimals. A fourth option has been around for some time now: IEEE 754, which is, of course, the basis for JS’s binary floats. That standard includes Decimal128, in addition to the Binary64 we use for JS numbers. Decimal128 uses fixed bit-width (128-bit) decimal numbers. It gives you a huge range of possibilities: we can accurately represent decimal numbers with up to 34 significant digits, and we can handle exponents of about −6,000 to +6,000 (actually somewhat more than that). If you think about the numbers that come up in your life, or in the life of your JS applications, my guess is that any number that has to do with human readability or human consumability will be in that range.
+
+JMN: The main change since the last time I presented decimal is that I have been working with WH on getting the spec text right. The main thing that was missing last time, when we asked to go to Stage 2 and didn’t advance, was that the specification wasn’t adequately fleshed out. So, working with WH since that presentation, we have done a lot of work. By the way, thanks, WH, for defining the space of values here. It’s fussy to get this right: on the one hand, we don’t want to specify some exact bit pattern for these things, but on the other hand, we don’t want to be too fluffy about them either.
+
+JMN: So we need a robust data model here, and thanks to WH, we have a solution. The idea is similar to how we think about mathematical values in the spec, which are just real numbers. We have a new class of entities in the language: NaN and the infinities. The point is, I am using a subscript D (𝔻) here to indicate these are the decimal versions of these things. NaN as it stands today is, of course, something we know and love for values that are not a number—but that’s in the binary64 world. What we are proposing is a new, decimal NaN, different from the NaN that JS has today. These are not our inventions. Likewise, the decimal infinities are not exactly the same as today’s positive and negative infinity, but new decimal variants of those.
+
+JMN: What we do is consider a kind of representation of these numbers. Basically, you can think of this as a mathematical value plus a kind of precision (or quantum, the technical term for these things). The idea is that we will have pairs, lists of two elements. We have some integer there, q—that’s the quantum—an integer within the range shown. Then, besides our good old friends +0 and -0, we have some mathematical value representing the significant digits, within range: you can see, for instance, a couple of inequalities there—the absolute value of the mathematical value is between 0 and 10**34. And the idea is that if you take this mathematical value and scale it by a certain power of 10, you get an integer. That is the significand; it represents all the digits. This is the new class of what we call Decimal128 values. These are spec values, not JS language values, in the same way mathematical values are not JS language values.
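+
+A small sketch of that value space as a predicate (the helper name is ours; the quantum range −6176..6111 is the IEEE 754 decimal128 range that the slide approximates as “about −6,000 to +6,000”):
+
+```javascript
+// A finite Decimal128 spec value can be viewed as a pair [q, v] where
+// q is an integer quantum and v = m * 10**q for an integer m with |m| < 10**34.
+function isFiniteDecimal128Pair(q, m /* coefficient as a BigInt */) {
+  return Number.isInteger(q) && q >= -6176 && q <= 6111 &&
+         typeof m === "bigint" && (m < 0n ? -m : m) < 10n ** 34n;
+}
+
+console.log(isFiniteDecimal128Pair(-2, 120n));      // true: represents 1.20
+console.log(isFiniteDecimal128Pair(0, 10n ** 34n)); // false: 35 digits is too many
+```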
+
+JMN: Another thing I want to draw your attention to is some diffs in the API. One thing that hasn’t changed: we propose a new library object called Decimal128. The name as proposed is subject to bikeshedding—we can open that up, but internally we have been using Decimal128 for a while now. Just to be clear, this is not a new primitive type, and there are no new numeric literals here. There’s support for decimal NaN and the decimal variants of positive and negative infinity. The API contains basic arithmetic—just the basics you would expect, like addition, subtraction, multiplication, and division—and, to be clear again, we do not propose overloading of `+`/`*`/etc. These will be methods.
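+
+Since the operators are not overloaded, mixing a Decimal128 object into Number arithmetic would silently coerce through binary floating point; the throwing `valueOf` described later in this presentation closes that hole. A sketch with an illustrative class (not the proposal’s actual implementation):
+
+```javascript
+// Hypothetical stand-in for the proposed object: valueOf throws, so the
+// wrapper can never silently coerce to a binary64 Number in arithmetic.
+class Decimal128Like {
+  #s;
+  constructor(s) { this.#s = s; }
+  toString() { return this.#s; }
+  valueOf() { throw new TypeError("Decimal128 cannot be coerced to a Number"); }
+}
+
+const d = new Decimal128Like("0.1");
+try {
+  d + 1; // without the guard, this would silently fall back to binary64
+} catch (e) {
+  console.log(e instanceof TypeError); // true
+}
+```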
+
+JMN: The constructor is going to take strings. Here is something new: BigInts are okay. That’s intuitively fine. BigInts represent exact data. Right? Within certain ranges, that’s fine. Numbers are fine but work needs to be done there.
+
+JMN: Rounding is something we need to do with a lot of human-consumable decimal quantities, and of course that’s present. What’s new is that we went from the 7 or 8 rounding modes of Intl.NumberFormat down to the five official IEEE 754 rounding modes, for various reasons—the idea being that implementations may not have the unofficial ones. Imagine some C library that provides decimal128: it won’t necessarily have the Intl modes that are outside of IEEE 754. We settled on focussing on just those 5 modes.
+
+JMN: The API will also have useful methods like toFixed, toPrecision, and toExponential, similar to Number. We would also like conversions to Number and BigInt. In the case of Number, a lot of programmers might want to enter the decimal world from, say, numeric inputs and exit it with numeric outputs. The idea is that this is a chance for us to define that in the spec, rather than leaving the poor programmers to do it themselves, which is common, and possibly get the wrong answer. It would be a tragedy if we put in all this work on decimal to get things right, and programmers still end up with rounding errors even though they are trying to get things right. In previous versions of this presentation, we have talked about having equals and lessThan methods—intuitive, that’s fine. We also talked about having a compare method that works not by mathematical value but considers the underlying exponent, which would allow us to distinguish mathematically equal decimal values like 1.2 and 1.20.
+
+JMN: But instead, we have settled on having a simple compare method to compare by mathematical value. This returns -1, 0, 1 or, thanks to our good old friend, NaN, it might return NaN. Don’t worry, it will be possible to recover the functionality that was provided by this previous version of compare. You will see why in the next slide.
+
+JMN: We would also like to expose the exponent and mantissa of these decimal numbers; this was previously just completely missing. We will have a scale-by-a-power-of-ten method, because this is probably something that many programmers will want to do—for instance, converting from, let’s say, euros and cents to all cents. The average programmer might not get this right, so it is reasonable to add to the API. And we make sure valueOf throws, to prevent any unintentional mixing with `+`, `-`, `*`, and so on.
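+
+For illustration, the euros-to-cents conversion mentioned above, done on exact integers rather than binary floats (a hypothetical helper for non-negative amounts only, not the proposed API):
+
+```javascript
+// "19.99" euros -> 1999 cents, i.e. scaling by 10**2 without ever
+// round-tripping through a binary64 Number.
+function euroStringToCents(s) {
+  const [euros, cents = ""] = s.split(".");
+  return BigInt(euros) * 100n + BigInt((cents + "00").slice(0, 2));
+}
+
+console.log(euroStringToCents("19.99")); // 1999n
+console.log(euroStringToCents("7"));     // 700n
+```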
+
+JMN: Just to remind you of our proposed solution to one of the conundrums: the issue of, you know, 1.20—do we expose that to programmers? We don’t do that with binary floats. We need to make a decision and solve the problem. The proposal here is to canonicalize on serialization to string. In IEEE 754 decimal128, the standard, 1.2 and 1.20 are distinct values in the universe of values, and there are all sorts of examples like that.
+
+JMN: And so if we naively push this forward with decimal128, you are going to see this kind of thing, and the question is: is this some kind of footgun or source of confusion for programmers? The idea is to say that when we serialize a decimal value to a string, we canonicalize by default, with an option to turn that off. If the value is `1.20`, what you will see when you ask for a string is `"1.2"`.
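+
+At the string level, that default behavior amounts to stripping trailing fractional zeros. A rough sketch (real canonicalization operates on the quantum; integer and exponential forms are out of scope here):
+
+```javascript
+// Drop trailing fractional zeros, and a dangling decimal point if one remains.
+function canonicalize(s) {
+  if (!s.includes(".")) return s;          // integer trailing zeros are kept
+  return s.replace(/0+$/, "").replace(/\.$/, "");
+}
+
+console.log(canonicalize("1.20")); // "1.2"
+console.log(canonicalize("1.00")); // "1"
+console.log(canonicalize("120"));  // "120"
+```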
+
+JMN: One thing that I would like to point out here is that in IEEE 754, working with these values is not a scary thing at all as we compare by mathematical value. So, for instance, 1.2 and 1.20 are, in the spec, mathematically equal and we get that result.
+
+JMN: And the comparisons are also going to compare by mathematical values. So 1.2 is not less than 1.20 in this setting. Even though there’s an extra 0 there, because we will work with mathematical values.
+
+JMN: So again, it might feel like we are entering murky water, it feels a bit fuzzy. But don’t worry about it. The point is, also, just to hammer this home even more, the mathematical operations, additions, subtraction are not sensitive to the distinctions. Adding 0.1 and 0.9 is supposed to give you 1.0. You might say why isn’t it 1? The answer is, well, it is 1. There’s just—don’t worry about that. If there’s some kind of need to add extra 0s or digits, it will be mathematically equal. It might sound scary or unintuitive, it isn’t. It’s something to embrace.
+
+JMN: There are a couple of outstanding questions for the committee, and I would love to hear some feedback about these. I think these are not blockers for Stage 2, or even a conditional Stage 2, but while we’re all here—we all love this stuff—I would love some input from you. Here is one of the issues. Because our good friend NaN is a polluter, it makes things annoying to deal with: less than, again working with mathematical values, is not just the negation of greater than or equal to, because of the extra checking for whether an operand is NaN and so on.
+
+JMN: And what is interesting is that the IEEE 754 spec provides a full suite of comparisons as basic operations: less than, greater than, equals, less than or equal to, and greater than or equal to. It even has the negations of these things, but I omitted those from the list. The question is: would you be satisfied with having just a compare method that works by mathematical value? That would mean you would write your own less than, your own greater than or equal to, and so on and so forth. Or should they be available in the API? I don’t know what the right answer is, so I am leaving it open. The reason for not including them is that in some sense they are a bit redundant—we would just be helping the programmer with a one-liner—but I also see why one might want to add them.
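+
+The one-liners in question look like this (a sketch using plain numbers in place of Decimal128 values, with a hypothetical `compare`); the NaN case is exactly why negating one predicate does not yield another:
+
+```javascript
+// compare() by mathematical value: -1, 0, 1, or NaN if either operand is NaN.
+function compare(a, b) {
+  if (Number.isNaN(a) || Number.isNaN(b)) return NaN;
+  return a < b ? -1 : a > b ? 1 : 0;
+}
+
+const lessThan = (a, b) => compare(a, b) === -1;
+const greaterThanOrEqual = (a, b) => {
+  const c = compare(a, b);
+  return c === 0 || c === 1;
+};
+
+console.log(lessThan(1.2, 1.3));         // true
+console.log(greaterThanOrEqual(NaN, 1)); // false
+console.log(!lessThan(NaN, 1));          // true: negation gets the NaN case wrong
+```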
+
+JMN: We have settled on a fairly simple API. As I mentioned, we have basic arithmetic by design. But one thing we might want to think about, and that I have waffled on, is square root. Square root is an interesting one because even though the output can have lots and lots of digits, it’s different from things like, say, exponentiation or the trig functions: the algorithm is straightforward, and we can get exact answers in some cases.
+
+JMN: So I am a bit on the fence about this one. I am happy to hear arguments one way or the other; I want to find out what you think. If you think about use cases, I can think of, for instance, square root for computing distances in a plane or in 3D space, so it’s possible we may need it. But in our investigation of what API programmers say they want and what they in fact use out there in the wild, we didn’t find square root, except for some more exotic applications.
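+
+To illustrate why square root is more tractable than the transcendental functions: on a scaled integer coefficient it reduces to an integer square root, which is exact whenever the input is a perfect square. A sketch via Newton’s method on BigInt (our helper, not a proposed API):
+
+```javascript
+// Truncated integer square root by Newton's method; exact for perfect squares.
+function isqrt(n /* BigInt >= 0n */) {
+  if (n < 2n) return n;
+  let x = n;
+  let y = (x + 1n) / 2n;
+  while (y < x) {
+    x = y;
+    y = (x + n / x) / 2n;
+  }
+  return x;
+}
+
+console.log(isqrt(144n)); // 12n  -> sqrt(1.44) = 1.2 exactly
+console.log(isqrt(2n));   // 1n   -> sqrt(2) must be rounded
+```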
+
+JMN: Just a quick summary of where we are. We looked at quite a lot of use cases for these things. We explored a space of data models. All are viable and reasonable on their own right but we settled on one. API that meets a wide array of use cases. Not all. But most. We have spec text, perhaps we can take a look at that here. Just a sec.
+
+JMN: Here you can see—making this a bit bigger—in the introduction we have a representation of that space of values that Waldemar provided for us, and then we also have some definitions about what it takes to be finite and zero, and the rounding modes. This table is borrowed from Intl; they have a similar table there. Some technical mathematical definitions here to extract things like the mantissa and so on. We have our constructor here, where we parse strings using a certain production. We have properties like whether a value is NaN or finite. Here is the simple API: we have absolute value; negation, which just changes the sign; add; subtract; and so on. It goes down here: we have toString, toExponential, toFixed, and so on. We also have some Intl spec text here—we have NumberFormat integration. It remains to work with PluralRules, as I had on the previous slide. Let me go back here.
+
+JMN: So Intl integration is mostly done. The main thing was NumberFormat, and that is done, although subject to revision, of course. PluralRules is next. We have a polyfill for all this—if you want to kick the tires, give it a try. We still have feedback we are working on; there’s a PR there. But as far as we can see, this isn’t a critical blocker for Stage 2 or conditional Stage 2, although that’s a decision for the committee. And that’s it. Let’s have a discussion.
+
+WH: I gave a lot of feedback on this proposal. https://github.com/tc39/proposal-decimal/issues/155
+
+JMN: Yeah, just a moment. Here we go.
+
+WH: [scrolling through GitHub issue] I would like to quickly present some of this. There are a lot of category errors in the spec, where it calls functions passing the wrong kind of argument. A lot of that needs to be fixed. Currently, 0s don’t work in any math functions. Also, operations producing 0s will just throw. None of the rounding is implemented at the moment. The spec currently throws wherever you get inexact results. I was confused by that a bit. I thought that it might have been intentional. But I think it’s missing functionality in the spec right now.
+
+JMN: Exactly. Yeah.
+
+WH: Okay. A lot of errors in the quantum handling where the spec doesn’t conform to IEEE 754. I provided suggestions for how to do these things. I provided suggestions for how to get the canonicalization right.
+
+WH: I suggested new semantics for the `compare` method, which were what was just presented rather than what’s in the spec.
+
+WH: I am strongly in the camp that mathematical ==, <, >, ≤, ≥ should be methods. If you don’t provide them, everybody is going to make their own versions anyway. And I see no advantage to omitting them.
+
+WH: I made some comments about conversions. In the spec as it is right now, toString of 10^1000 produces a string of 1 followed by 1000 zeros. I’d prefer to switch to scientific notation for sufficiently large and small values. I see from the GitHub issue that you agree.
+
+WH: There are issues where passing the “no canonicalization” option to `toString` actually does “maybe canonicalization” depending on the value. No should mean no.
+
+WH: There are issues when converting numbers with how to determine the quantum of the resulting Decimal128 values. I think we agree on the solution.
+
+WH: Object identity is currently inconsistent. Sometimes methods return reused Decimal128 values. Sometimes they create new ones. Sometimes they get the quantum wrong.
+
+WH: The grammar for conversions is wrong.
+
+WH: I suggested that we provide a few methods to work with exponents and mantissas. These are not quantum extraction methods. These return the mathematical exponent and mantissa. These are not the way to get the quantum.
+
+WH: The rest are readability improvements which I suggested for the spec. Overall, we keep going back and forth and I am really happy with the collaboration. But the spec as it is now requires significant work for it to be self-consistent. I will support Stage 2 once it’s fixed.
+
+JMN: Okay. Thanks for your feedback. I appreciate the help.
+
+WH: Given how extensive the needed changes are, I think we need another round or two of me carefully going through this and fixing more things that come up. I will help with this.
+
+JMN: I put a check where there’s a fix available on a branch. I think I have a comment here about… a PR, and you also have a sketch here of how to fix things. Looking forward to diving into this more and getting these nailed down.
+
+CDA: Shane?
+
+SFC: I also had a few looks at the Intl spec. I posted on the issues there but didn’t receive any replies, and I posted some more thoughts on the PR for the Intl spec. It also needs some work. I think I know what you are trying to do, but I can’t verify it, because I can’t read the spec in its current state—it has the wrong types being passed into functions and things like that. Once those issues get resolved, the shape this is taking is, I think, very positive.
+
+JMN: Okay. Thanks. I will take a look.
+
+CDA: Nicolo?
+
+NRO: Yeah. This is a question not for Jesse specifically but for the committee in general: what do we expect of spec text for Stage 2? Jesse asked other people, and they gave him conflicting points of view on what an acceptable Stage 2 spec text is. I wonder if we have any opinion on what exactly “major semantics included” means? In this case, I told Jesse the semantics are defined even if there are bugs. It turns out maybe that’s still not enough.
+
+CDA: Does anybody have any thoughts on that?
+
+DE: CDA, you put yourself on the queue with thoughts on that. Do you want to say something?
+
+CDA: I was just copy-pasting from the process document. I don’t have—
+
+DE: Can you say what you meant by that? Is that—I thought that was an interesting comment and I wanted to hear your take.
+
+CDA: So it’s pretty superficial. NRO asked about the expected spec level for Stage 2, and I simply copy-pasted the entrance criteria. To Nicolo’s point, that does leave some room for interpretation. So for the entrance criteria for Stage 2, the question being asked is: does the current state of the decimal initial spec text meet that bar or not? Obviously, it’s not final; there are, arguably, placeholders, to-dos, and issues, and those issues are not limited to editorial ones. I cannot speak to whether it includes all major semantics, syntax, and APIs—it probably does, based on the checklists we are looking at here, although I don’t know; you could argue about the Intl integration. But this is for the committee to determine collectively.
+
+NRO: I guess then I would like to re-frame the question: does fixing the current spec bugs change the API shape or the actual semantics, or do we agree on the semantics, and fixing the spec would just make it match what we expect it to do?
+
+DE: So I think—
+
+CDA: Sorry. Michael is—moving Michael up a little bit. He’s got some relevant comments.
+
+MF: I hope so. I just wanted to try to make a clarification of something that Chris said. It may have just been a small mistake, but we don't look to get to the point where it's just editorial issues remaining for Stage 2. This has major semantics defined. It is totally fine and very common for us at Stage 2 to have things that don't really make sense, but we have a good idea of what we're trying to do. Stage 2.7 is the stage where we need to make sure there's only editorial issues remaining.
+
+CDA: Waldemar?
+
+WH: This is not just editorial issues. The Stage 2 criteria are that the major semantics are implemented and presumably work. They don’t currently work. None of the arithmetic operations work. String conversion doesn’t work. There is no rounding. So this is not editorial changes. And as we fix these things, the semantics of user-visible behavior changes. So we keep going back and forth as to what the user-visible behavior should be. Also, I don’t think it’s particularly productive to have a meta-discussion about entrance criteria for Stage 2. I suggest a more productive use of time would be to see if anybody has blocking objections on the merits or if there are other technical things that might be relevant and we haven’t discussed yet.
+
+CDA: Ashley?
+
+ACE: So from my perspective, one data point: we have mostly solved the semantics, and I agree with what Waldemar said that there are still some things in flux. And maybe this is what Waldemar just said or slightly different—I apologize either way. It would be great to know, with only a few things remaining, whether we are close enough. If some of the spec operations just need a little bit more discussion—like whether or not to include a particular method—and that is the set of remaining things, then maybe that’s a conditional Stage 2. I think it’s important for us to agree on whether this is Stage 2 or not, because of the human element: Jesse has put years of work into this, and keeping something at Stage 1 for a really long time when things are so close is discouraging. It would be great to be able to say: there are a few things left to discuss, and once we resolve them, we as a committee will be really happy to see this at Stage 2. I think that would be a really great point to reach, and we shouldn’t get too distracted by things that aren’t major compared to questions like whether this is a primitive or not, or whether it is Decimal128 or BigDecimal. Those are clearly major semantics.
+
+CDA: Shane?
+
+SFC: Yeah. So I like the way that this is presented in the slideshow. If we approved slideshows, I would personally support Stage 2, because this resolves the issues I’ve been discussing with Jesse about the Intl integration in general. I can take it on faith that this is how the proposal will behave, and we can discuss the details further on the way to Stage 2.7. But we don’t approve slideshows, we approve spec text. So what do other delegates feel about that? I can’t verify that the spec actually does this; when I look at the spec, it’s not intelligible right now.
+
+CDA: Dan?
+
+DE: I think for Stage 2, the important thing is that we have the major semantic points worked out, and that is conceptual agreement rather than agreement about the document. I wanted to note that in this case we have this very extensive PR fixing many of the issues, if you look in the repo. We don’t have conceptual disagreements about what the semantics should be. There are open questions—I think pretty scoped open questions that are appropriate to discuss during Stage 2—about, for example, whether we have square root or lessThan. And so I think this pretty clearly meets the criteria. Nevertheless, maybe to avoid getting stuck on this meta discussion, and because they are making such fast progress on actually getting the spec ready, what if we conditionally advance to Stage 2? It stays at Stage 1; offline, Shane and Waldemar keep iterating with Jesse, and it advances once they give it a positive review. We have done this many times before—conditional advancement is part of the process. I am not sure it needs to come back to committee for Stage 2, because of the conceptual agreement. Unless there are other things people have to discuss.
+
+RBN: Yeah. Just following up on what Dan was saying, and I will keep this short because I agree this is a meta process discussion: everything we have listed in the process document around Stage 2, this seems to fit. I mean, the real goal of Stage 1 is to examine the solution space and come up with a solution, and Stage 2 is choosing one and drafting out the spec text. We expect there are going to be huge chunks missing. We have advanced things to Stage 2 in the past with less stability than what is in this proposal, and a lot of the things that Waldemar is concerned about are listed in the purpose column for Stage 2: refining a solution, handling invalid inputs, changing some of the details. It feels like we are putting more rigor on what gets into Stage 2 than we have ever done in the past. I don’t know if that’s a product of introducing Stage 2.7 or anything else we might be taking up from previous discussions over the years, but it feels like this is stricter than we have been in the past.
+
+CDA: Noting we have about 15 minutes left, Rob?
+
+Prepared statement from SYG:
+
+> V8 has the following questions and non-blocking concerns:
+>
+> 1. Remains unconvinced by use case but can live with API-only solution.
+> 2. What's the deal with decimal NaN and infinities?
+> 3. Prefers choosing round to nearest, ties to even and that's it. What's the use case for supporting all five? We don't for floating point.
+
+JMN: The reason why we support a couple of different rounding modes, and not just the single one that Number supports, is that there are different ways of doing rounding. In finance, think about, for instance, rounding ties to even: that’s fine, it has its use cases. But then there are other kinds of rounding. For instance, plain truncation is also fine; it’s considered rounding. When I do my taxes and the result is that I owe $1.30, I just pay $1—they don’t care about the cents. Things like that. There are no ties to even in that setting. The other rounding modes presumably also have their use cases; they’re part of the official IEEE 754 spec for a reason. That’s it. There was another question about the motivation. Right. I guess we have presented the data on this many times; I am not sure how to address that, really. Maybe this is just a product of being in different JS communities, subcommittees, and areas. There was another question. Can you repeat it?
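
The contrast JMN draws can be sketched with plain `Number` math. The `Decimal128` rounding-mode API is still in flux, so the helper names below are illustrative, not the proposal's API:

```javascript
// Two of the five IEEE 754-2019 decimal rounding modes, sketched on
// plain Numbers (here rounding to whole dollars).

// "trunc" (round toward zero): the tax-office style JMN mentions.
function roundTrunc(x) {
  return Math.trunc(x);
}

// "halfEven" (round to nearest, ties to even): the usual default.
function roundHalfEven(x) {
  const floor = Math.floor(x);
  const diff = x - floor;
  if (diff > 0.5) return floor + 1;
  if (diff < 0.5) return floor;
  // Exact tie: pick the even neighbor.
  return floor % 2 === 0 ? floor : floor + 1;
}

console.log(roundTrunc(1.3));    // 1 -- owe $1.30, pay $1
console.log(roundHalfEven(2.5)); // 2 -- tie goes to the even side
console.log(roundHalfEven(3.5)); // 4
```

The two modes agree on most inputs but diverge exactly where financial rules care, which is why a single built-in mode is not enough for these use cases.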
+
+RPR: “What is the deal with decimal NaN and infinities?”
+
+JMN: Well, essentially, for spec compliance—to be able to handle cases when such values need to enter the system from elsewhere. That was the main motivation. To be honest, I myself am not a fan of having these in the language. In earlier iterations, we considered dropping them completely and just throwing when an operation would produce a NaN or infinity. In my view, that’s valid. But I appreciate the considerations on the other side as well, which is why they are there.
+
+DE: I believe that the rounding modes have applications in money and in finance, which is the primary use case for this. I think we should review those use cases at a future TC39 meeting. It seems like we don’t have them on hand, and that’s something to come back with. This makes sense during Stage 2.
+
+JMN: Sounds great. Thanks.
+
+DMM: So my specific comment on the queue was: you mention in the slides that compare can return NaN, but I didn’t see a path in the spec that did that, and I was wondering which path was meant to. I also see the spec returning true, false, +0, -1, and +1, so I think it needs a little bit of editing.
+
+JMN: The handling of NaN is in the new work-in-progress PR. I’m sorry—in the spec as you see it right now, that’s not there; that’s an older version of compare. The newer version allows NaN and handles NaN correctly. The reason why NaN shows up in this newer version of the spec, in this work-in-progress PR, is that NaN essentially pollutes things: when one of the inputs is NaN, you have to check and return NaN. One could say undefined is a valid response in that case, but yeah.
+
+DMM: I would caution against having a different set of compare results from normal floats. For normal floats, I think it’s always -1, +0, +1—and I think NaN compares equal to NaN on compare?
+
+JMN: Yeah. Good point. I can appeal to the spec—there is an answer given there. And just to say again: NaN is ugly, it pollutes things and complicates our thinking about these things, the function signatures get complicated, and…
+
+CDA: Waldemar?
+
+WH: To answer DMM’s question, the thing in the spec is an old version which was quantum-sensitive. I suggested we change that. The new version that returns the -1, 0, +1, or NaN is not in the spec yet.
+
+CDA: Shane?
+
+SFC: Is this mic working? It’s working now. Yeah. So I had another—this is not really a Stage 2 blocking concern, so other people with Stage 2-related things in the queue can go first. [This is a comment](https://github.com/tc39/proposal-decimal/issues/12#issuecomment-2052482964) I raised a couple of months ago. I never received a response, and it might be worth discussing. I have definitely made clear previously, on multiple occasions, that Intl.NumberFormat should be able to format decimals. If toString retains the trailing zeros but toLocaleString does not, that creates an inconsistency between toString and toLocaleString, which seems strange and unpredictable. This is a topic we can discuss to decide which direction to take; I posted possible directions in the issue but never received a response or acknowledgement about the comments. So I don’t know if we want to discuss that now—we can do that later, but yeah.
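
For context on the inconsistency being described: a `Number` cannot carry trailing zeros at all, so today the two string forms never disagree about them; the question only arises because `Decimal128` (per the slides) preserves the digits it was constructed with. A runnable sketch of the `Number` side, with the `Decimal128` behavior described only in comments since it is not implemented anywhere yet:

```javascript
// A Number literal drops trailing zeros immediately, so toString and
// toLocaleString can never disagree about them:
const n = 1.10;
console.log(n.toString()); // "1.1" -- the ".10" was never stored

// A Decimal128 constructed from "1.10" would remember the extra digit,
// so (hypothetically) dec.toString() would print "1.10" while formatting
// the same value through Intl.NumberFormat's defaults would print "1.1"
// -- the toString / toLocaleString divergence raised in
// tc39/proposal-decimal#12.
```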
+
+JMN: That’s an oversight on my part. Sorry. We can discuss that today and off-line.
+
+CDA: Shane, you’re next on the queue as well.
+
+SFC: Great. My next topic: I counted 19 functions, which, based on what we did with Temporal, is not that big, but also not too little. So when we talk about things like lessThan and so forth, that’s the definition of API bloat—you can do all of it with compare, and compare has clean semantics. I am happy with what is in the slideshow today. And we have already pointed out how lessThan and so forth can be error-prone because of how they handle NaN and things like that. So having just compare seems like it would be a much cleaner solution.
+
+CDA: Waldemar
+
+WH: Adding `lessThan` or `equals` operations, I don’t think is API bloat. And users will get it wrong. If you ask users to implement ≠ by using `compare`, most will get it wrong.
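
A sketch of the pitfall WH describes, using plain numbers as stand-ins for decimals. The `compare` here follows the -1/0/+1/NaN design discussed above; the helper names are illustrative, not the proposal's API:

```javascript
// compare() in the style under discussion: -1, 0, +1, or NaN when
// either input is NaN.
function compare(a, b) {
  if (Number.isNaN(a) || Number.isNaN(b)) return NaN;
  return a < b ? -1 : a > b ? 1 : 0;
}

// Tempting but wrong: "a <= b is just NOT (a > b)". With a NaN input,
// compare() returns NaN, NaN === 1 is false, so this reports true.
function lessThanOrEqualNaive(a, b) {
  return !(compare(a, b) === 1);
}

// Correct: every ordered comparison must be false when NaN is involved.
function lessThanOrEqual(a, b) {
  const c = compare(a, b);
  return c === -1 || c === 0;
}

console.log(lessThanOrEqualNaive(NaN, 1)); // true (incorrect)
console.log(lessThanOrEqual(NaN, 1));      // false
```

A built-in `lessThan`/`equals` bakes the NaN handling in once, instead of every caller re-deriving it from `compare`.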
+
+CDA: Eemeli?
+
+EAO: So now that we’re not doing decimal as a primitive, when I look at what decimal actually does beyond adding the Decimal class itself and the methods on it, the thing that goes beyond what a library can do, as far as I can see, is the Intl integration—mostly NumberFormat. And this to me seems like a key motivation for having decimal in the spec as opposed to having decimal as a library. So with respect to advancing to Stage 2, my concern is that my sense is that this has not been considered directly. It’s more indirect: given that we’re doing decimal and it needs to interact with Intl.NumberFormat, this is roughly how it should work. Reviewing the proposal repository and the issues and pull requests there, I don’t see much discussion of, or a convincing case for, the benefit of decimal being more than a library with respect to how it interoperates with Intl.NumberFormat. And then, more directly: given that there is interest in this interoperation with Intl.NumberFormat, what questions are being answered there? What problems are being solved by the interoperation with Intl.NumberFormat, and is having something like decimal as a thing in the spec the right approach? Or is it more appropriate to consider something like smart units, and to ask whether it makes sense for Intl.NumberFormat to be able to process some richer entity than just a number value for formatting—whether some solution around smart units, for instance, could also cover the parts of the decimal proposal that integrate with Intl.NumberFormat—and whether, overall, this is the right approach for solving the issues that decimal is addressing. And my request here is that it would be really nice for this discussion to be more direct, rather than indirect, about what the linkages with NumberFormat are.
+
+JMN: Yes. Thanks. I have to say this is a part of the proposal where my thinking has evolved. Initially, I was thinking of Intl as something on the side, but more and more I see it how you describe: something that really motivates the case here. But then there’s an obligation to make that clear to anyone who comes to this proposal. The discussion of the Intl integration has not been so thorough, and in fact there’s work to be done for PluralRules. I look forward to working with Shane on the integration here.
+
+DE: I want to disagree that Intl is the primary motivation for decimal—I don’t think that’s what you meant to argue. In particular, this is similar to other libraries that we’re adding to JavaScript. Temporal has Intl integration, but iterator helpers don’t, set methods don’t, yet they are motivated as standard library features because they are generally useful for JavaScript developers. And the interchange point in particular is a big deal: Jesse has presented in committee that several different JavaScript ecosystem libraries need to present decimals to their users, and they kind of choose randomly among the decimal libraries out there. They work together better if we have a built-in decimal library. As far as Intl integration, we don’t need additional features for this: we already have support in NumberFormat and, I believe, PluralRules for passing in a string for the number, and that string can carry a decimal value without it being rounded to a Number. If you want something minimal, that is already in place. It’s true that the only place decimal intersects with another feature is Intl, but that does not detract from the benefit of having standard libraries that solve users’ problems.
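
The precision point behind DE's remark can be seen in a runnable sketch: round-tripping decimal text through `Number` silently changes it, which is why being able to hand the digit string straight to `Intl.NumberFormat` (string inputs per the Intl.NumberFormat V3 work DE refers to) matters:

```javascript
// Above Number.MAX_SAFE_INTEGER, distinct decimal strings collapse to
// the same Number value:
const viaNumber = Number("9007199254740993");
console.log(viaNumber === 9007199254740992); // true -- off by one

// Classic binary-float artifact: decimal 0.3 is not representable.
console.log(0.1 + 0.2 === 0.3); // false

// Passing the original string to the formatter avoids the Number detour
// (output elided here, since it depends on the ICU data available):
// new Intl.NumberFormat("en-US", { useGrouping: false })
//   .format("9007199254740993");
```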
+
+SFC: I guess I’m next on the queue. Yeah, so, I agree with a lot of things that Eemeli just said, and the PluralRules interaction is a motivator for me. That’s one good thing about the committee: we don’t all have to be motivated by the same reasons. If this proposal is motivated for different reasons by different delegates, that’s fine. I do like Eemeli sort of teasing this out.
+
+EAO: Responding somewhat to what Dan said there and the other discussions we’ve had ongoing: the solution that I would find very interesting to try to work on is finding a way to be able to add libraries to JavaScript. I might be paraphrasing what you just said, Dan, but I’d like us to find a way to have a general solution available, rather than needing to address this specifically, repeatedly.
+
+DE: Right. I think we see across programming languages that they do have standard libraries, and I think that’s a pattern we should follow. We’ve discussed having built-in modules as one potential path to this, and that was rejected by Mozilla and Chrome; it’s not clear to me what the next steps are. There’s also a versioning problem: ecosystem libraries tend to have major versions and standards don’t, and that versionless stability comes from the rigor of standards review like this. It provides real value and real stability over time, and it is something that this committee should continue to be responsible for. If somebody comes up with a way to bless libraries, we should do it, but I’m not convinced we should hold back a standard library feature in the meantime when it is motivated, as Jesse has motivated this one.
+
+EAO: I just wanted to note that I am interested in us having blessed or built-in libraries, and I would be interested in working towards that. Of course I’m not speaking for Mozilla, but I would be interested in advancing this so that we can have the more appropriate discussions and solve the right problems, rather than specific smaller ones.
+
+SFC: And I’ll just chime in to say that built-in modules are something we discussed extensively in this committee a couple of years ago, and I was pretty compelled by the arguments and the position that we held. I don’t think it’s a good time to revisit that unless somebody wants to make a compelling argument reviewing the prior discussion of this topic. In the absence of that, I think we should take the path that Temporal is taking: introducing new globals as new standard library features.
+
+DE: Okay. I think it could be good to reopen that discussion, but I’m also not sure what we should do in the interim. Since this is lurking in the background: I don’t think we should stop adding to the standard library, while continuing to be conservative about what we add, only adding things that are broadly useful, like this.
+
+CDA: JHD?
+
+JHD: I wanted to echo Shane’s point. I don’t think a single thing would be different if it was built-in modules versus globals for almost any proposal. Things would still need to be justified and independently motivated and so on, so I agree that’s just not a good use of our time, because I don’t think it would change the calculus of what we’re discussing today.
+
+CDA: JHD?
+
+JHD: So, yeah, my next item. Jesse has done a lot of really good work on this proposal. Waldemar is a very thorough reviewer, and I’ve witnessed a lot of back and forth; I commend them both for doing that work. My position is no surprise to Jesse—I’ve communicated it to him multiple times and discussed it in plenary, the last one or the one before that. The current position, or the current reality, about adding a new primitive—I’m going to paraphrase from Matrix—is that a primitive only makes sense if it’s actually going to be widely used. Unless someone can come up with that proof, that means no new primitives are possible. That leaves decimal, as well as a few other proposals, in a place where they have to either pursue an object form or give up entirely, and that’s a crappy situation for a champion to be in, because they want the thing. When the ideal solution is a primitive and that’s not available, you have to find or come up with some alternative approach. I still think that number systems are such a conceptually primitive thing that it doesn’t make any sense to me to not have them be primitives in JavaScript, at least. I’m sure there are languages that people will throw at me that have non-syntactic number systems, but in JavaScript that is how they work: they’re number literals, and BigInt with the `n` suffix. The arguments I have heard for decimal as objects (and to be clear, I very much want decimal as a primitive): one, it’s a coordination point. Fair—that’s true of almost anything we add to the language, and that’s a good benefit. Two is `Intl` integration, which Eemeli brought up, which I had not thought about; but as Dan brought up, it’s an enhancement, not a brand-new capability—you can still do it with strings. And that’s all I have heard. I haven’t heard a performance argument, and I don’t think I’ve heard a correctness argument either, because there are correct big-number libraries out there.
The proposal as written could exist, I mean, there are npm polyfills available, so it could exist as an npm library just matching it and in theory, if it is in fact the best design, it would gain adoption and dominate the space amongst the future users of Decimal. All of them are using a solution already, and they would flock to this better solution if that is in fact the case (that it is better).
+
+So, I’m not trying to argue for primitives specifically here; I’m just saying it doesn’t feel to me like this carries its weight, because if it’s not a primitive, I don’t see how it will get adoption among people that are already using one of the alternatives—one of the libraries for this purpose. On the meta aspects of Stage 2 that we’ve talked about: fair, I actually agree that it meets the requirements for Stage 2. But Stage 2 indicates that we expect the feature to be in the language. Effectively, it means that if somebody is not sure it should be in the language, then they should object to Stage 2, and that’s the position I find myself in now. So—those are my thoughts.
+
+CDA: Dan.
+
+DE: So, I definitely see that it would be nicer if this were a primitive, but why do these arguments apply differently to decimal compared to Temporal? For Temporal it would have been nice to have triple equals, and plus or minus to add durations to `Temporal` values. It also could have just been in the ecosystem instead of the standard library. So how do those cases relate to each other?
+
+JHD: Yeah, that’s a good point. To me, almost any value could theoretically benefit from operator overloading. As somebody joked in TDZ the other day, C++ has never seen an operator overload it didn’t like. There’s ergonomic appeal to that, but that’s not what I’m talking about. Dates and times have always been objects in JavaScript, and even in languages where they have operator overloads, they’re generally not a primitive immutable value. It’s hard to compare to other languages, because JavaScript has this distinct primitive-versus-object concept that isn’t exactly the same elsewhere. I don’t know how to word my response in a way that’s convincing, but to me a number system needs to be a primitive and dates and times don’t. And Temporal very much carries its weight, both because Date is horrific and also because the alternative libraries out there solve things in very different ways with very different trade-offs—I have long been convinced that Temporal in some API form is indeed the best solution to the problem. So, yes, I think Temporal carries its weight.
+
+JMN: I appreciate the argument. I think that’s certainly worth thinking about. One thing that I might add—and I’m sure you know this just as well—is that the fact that we have, as Dan says, thousands of users of a random assortment of decimal libraries shows that there’s considerable demand for this kind of thing. And something I really like about the JS world: every day, someone new starts web development, doesn’t know about decimal numbers, runs into some kind of rounding issue, and reaches for one of these libraries. If we were to add this to the language, even in its current form—which, to be clear, is also not my ideal form; this is a compromise position—it would be something valuable for programmers. We would be helping out countless people, presumably every day, if this were in the language. So, just to recap: we know that there’s demand for this kind of thing, and we also know that people are going to keep running into these issues. That all by itself is an argument for having this available, even in a less-than-ideal form. Another thing I might add is that we know about the tendency of developers to add lots and lots of dependencies; there’s concern about that, and reducing dependencies is a goal. Something that has gone through this very robust review process carries a lot of weight for developers.
+
+CDA: Yea, just to. Sorry, JHD, hang on one second. I want to note that we have about 15 Minutes left. JHD I’ll let you reply and then Jesse. And then I’m going to let the queue continue. And Jesse, take a look at the queue and see if you want to proceed chronologically, or anything to move up in the interest of time. Go ahead, JHD.
+
+JHD: Regarding the no-dependency crowd, I agree that people _think_ that they want that. But the main response: you said it’s clear there’s demand. I actually looked at the proposal and I only see one specific npm library linked, and that one has 30 downloads a week. So I think it would be helpful if you—maybe in Matrix—or somebody could throw me a list of the alternative libraries, because what I recall from previous presentations is a very short list of libraries that have usage, and their usage is not exceedingly high. Meaning it’s probably critical for the people who need it, but that doesn’t mean a lot of people need it.
+
+JMN: Fair point. That’s an oversight. There are indeed these libraries, I’ll add the links. Thank you.
+
+CDA: Okay, Jesse, are we going through the queue in order?
+
+JMN: Yea, why not. I don’t have any good principle why not to.
+
+NRO: So, I was in a champion group that was struggling with this primitive constraint, and like JHD said, primitives are off the table unless we can actually prove that they’re going to be widely used. And as others said, if we add decimal to the language as an object and it gets actual wide adoption in browsers and is used on many websites, that is that proof. JHD just said we could get this proof from the ecosystem, but in this committee, pulling out a number from npm and saying “look, this library has one million downloads a week, so clearly it’s being used” has not been received very well in the past, so I don’t think that’s a valid form of proof. And, as Jesse hinted, having something in the language improves discoverability for developers: you don’t have to find out that some specific library exists by Googling for it and looking for it on npm.
+
+KM: Yeah, so, as I think has been discussed many times, primitives need a lot of justification to add to the language just because of how much they impact every engine—they touch everything, millions of lines of code—so I think that makes them very burdensome, and you need a very high bar; we’ve talked about that many times. This discussion keeps coming up, and I think adding operator overloading to JavaScript feels like it would resolve a lot of these things. JavaScript is probably one of the few dynamic languages that doesn’t have operator overloading—there are plenty of limitations on how you can do it in those other languages, but it exists. So I think it’s potentially a design space that we could explore. A lot of engines, because of the way they already have to handle integers versus doubles versus strings, and dereference the pointer to the object to figure out whether two values are equal, could probably do it. So I think it’s possible—I don’t know if it’s a guaranteed doable thing, but it doesn’t seem outside the realm. I know there’s pushback for various reasons, like ergonomics and developers understanding operators, but I feel like we have this discussion a lot, and it could just be resolved if we had operator overloading.
+
+CDA: Ashley.
+
+ACE: Yeah, so, on the point about where decimal is going to be used: there’s the Intl integration, but there’s also the vertical integration with the web platform. There are Node APIs that return BigInt and web APIs that return BigInt; similarly, I could put a decimal in and structuredClone it across to my worker, and my worker can pick up that decimal even though it doesn’t have a decimal library from npm. So I think that, again, is where having this in the language can add so much value compared to userland—building on that interchangeability is really, really valuable, even as an object.
+
+SFC: Yeah, maybe—it looks like Dan can address my question. I would have liked to have seen a slide on this; I feel like there were slides in previous versions talking about operator overloading, and that content would be relevant for this slideshow. Maybe you can address it briefly.
+
+DE: Yeah, on operator overloading: we have discussed this in committee, and I brought a proposal. There are two ways we could consider it. One is that certain objects can have operators overloaded on them; the other is a way to define new primitives. We’ve heard from engines that they wouldn’t like to do the latter. For the former, maybe the Achilles’ heel is that developers would like to overload triple equals, but that’s an operator that is quite difficult to overload because it’s identity for objects, and you kind of want that to remain a comprehensive, reliable thing. There was another concern about injecting behavior into unsuspecting programs, which I proposed a solution for with opt-in `with operators from` syntax that you would have to write—some people found that interesting, but it would probably have runtime overhead. Overall, if we add operator overloading for objects, or decide we’re okay with a one-off primitive, either way we could go back and add that to decimal. If it’s operator overloading added to decimal in place as an object, the operators already throw today because valueOf throws. If we’re okay adding a primitive, that also works: Decimal128 objects would become wrappers over the primitive, and again, because valueOf throws today, we could make valueOf return the new underlying primitive type, and it would be completely coherent and analogous with what we already have. We’d still have methods for addition, but I don’t think that’s a very costly form of legacy from the transition. So, yeah, I think there is that extension path—completely coherent and completely aligned—if we wanted to go that way. But it’s not necessary, because the feature is useful in its current form.
+
+ACE: Um, if I may make a suggestion. Oh, there’s a point of order.
+
+CDA: You go.
+
+JMN: I think the idea is that we just don’t have that much time. I see—at least one question in the queue about overloading. Um, if I have permission to do so, I’d like to table that. I think we’ve discussed that many, many times. I don’t think that’s anything we’re going to resolve today. Um.
+
+CDA: All right. I’ll take a look at the topics. Do you want to go to Shane or to—
+
+JMN: Yep, sure, go ahead, Shane.
+
+SFC: This is the performance one. I mean, if the committee thinks this is motivated, as far as I’m concerned, that’s fine. But it would make the proposal more compelling for a lot of people on the fence if you could show performance numbers, even just for transparency—even if they’re not the numbers that you would like to see. If you have a WebAssembly decimal128 implementation versus this Decimal128, what are the numbers going to be? I don’t know what they are and I’ve never seen them; I’ve had that issue open for a year and haven’t seen any progress on it. Again, it seems like there are a number of committee members, including myself, who think this is Stage 2 material even without those numbers, but it might help those on the fence if we had them.
+
+JMN: Yep, I agree. Sorry about that. That’s an open issue on my end. I know about it.
+
+CDA: We have just a few minutes left and we cannot go over because we have—we are burning time. Dan, please.
+
+DE: It’s been discussed that we do have overflow slots in the afternoon. Performance is not the main motivation for this feature—I’m not really aware of applications that have severe performance requirements for this, and if we did care about performance we would focus on embedding decimals into arrays. On issues that don’t get addressed: please reach out to the champion if you don’t hear anything; it can be hard to keep up with GitHub notifications. So I don’t think that should be a requirement for Stage 2.
+
+JMN: We’re running out of time, and rather than taking an overflow slot out of anyone’s volunteered time, with your permission I would like to stick with the time as it is. So, just to wrap up: we had this discussion—kind of a meta discussion—but I would like to propose conditional Stage 2, which means it stays at Stage 1 with the understanding that there will be considerable iterating on the issues. We have a long task list from Waldemar, Shane has items as well, and there’s also some work to be done on the Intl side. That’s on me, and the champions generally, to take care of. So, do I have conditional Stage 2 in that sense?
+
+CDA: Eemeli is asking if you can explicitly enumerate the outstanding topics?
+
+JMN: There is the PR with a very long list. It’s coming from a list of issues from Waldemar. I don’t think going through that would be that productive.
+
+EAO: You also mentioned any outstanding topics raised today and this I think needs definition for this to proceed.
+
+JMN: Right. Okay. Uh-huh.
+
+DE: So, maybe you meant the ones that were in your slide deck. Is that what you meant? Because there were other outstanding questions that were not there, but the plan was to stick with the changes in your slide deck. Is that accurate?
+
+JMN: Yes. Does that clarify, Eemeli?
+
+JHD: My point remains: I still don't think it carries its own weight without primitives. I still want to be convinced. I'm certainly happy to continue discussing it with you or with anyone, but I'm not convinced that it's worth having this in the language as an object. For the record, I think that it has met the stage 2 qualifications, perhaps conditionally. I agree with Dan's description of a coherent path towards adding primitives in the future, but if we don't end up getting the primitives, I feel like this will not be a beneficial addition to the language.
+
+CDA: DE.
+
+DE: Sorry, is this, are you blocking consensus or is this a nonblocking concern?
+
+JHD: Yes, I’m not providing consensus for this. You could call that a block.
+
+WH: I’d be happy with conditional stage 2 pending spec fixes. I’d sign off on the spec once the spec is working. And that’s something I could do out-of-band between me and the other people working on the proposal.
+
+JMN: Yep, sounds good. Let’s have another video call. That was very helpful last time you and I chatted.
+
+SFC: Yeah, my comment basically says what it needs to say. I support stage 2 with these two conditions met: Intl.NumberFormat and Intl.PluralRules having behavior that retains trailing zeros. As I said earlier, I do find this proposal motivated because of the Intl.NumberFormat and Intl.PluralRules integrations. I think it solves one of the most common footguns with regard to internationalization of numbers on the web: that you can pass the same numeric value to both and get correct behavior both ways. To me, that's pretty solid motivation, so I hope that the champions can resolve JHD's concerns and we can move this forward.
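+
+The footgun SFC refers to can be illustrated with today's Number-based Intl APIs (an illustrative sketch; exact output strings depend on locale data):
+
+```js
+// A value like 1.0 loses its trailing zero as a JS Number, so formatting
+// and pluralization can disagree with the intended "1.0 stars":
+new Intl.NumberFormat("en", { minimumFractionDigits: 1 }).format(1); // "1.0"
+new Intl.PluralRules("en").select(1); // "one"
+new Intl.PluralRules("en", { minimumFractionDigits: 1 }).select(1); // "other"
+// A decimal value carrying its own precision would let both APIs agree.
+```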
+
+JMN: So, just to summarize, this stays at stage 1, given the block, that’s my understanding of our discussion, right?
+
+CDA: Yeah, JHD—sorry, JHD, can you, for brevity and for the notes, state your position briefly?
+
+JHD: Sure. I think that a proposal for a number system that does not include primitives does not currently carry its weight for stage 2, since for me stage 2 represents a pretty clear signal that we're attempting to move it forward and put it in the language. I'm open to being convinced and I'm happy to work with the champion in the future to resolve the concerns.
+
+RPR: And just to be clear, JHD, you’re saying that the incremental approach is not acceptable. Does it need to go straight to the primitive in the first version?
+
+JHD: I'm not necessarily making the statement that it must go straight to the primitive in the first version. I'm saying that if it had the primitive in the first version, I would be convinced it carries its weight, but that's not the only way I can be convinced. I agree that it's technically feasible to do the incremental approach, but if we didn't end up getting all the way there, then I would very much regret it getting stage 2. So I'm going to hold back on it for now.
+
+DE: So, what do you mean that it's not the only way? Or, as LCA stated it: is the only way that you won't block this proposal for it to include primitives?
+
+JHD: No, I’m trying to clearly, as clearly as possible state that I’m—I have an open mind here. I’m not, like, affirmatively stating the only path forward is primitives right out of the gate. I’m saying that the only thing I see that at the moment that would make the proposal carry its weight is primitives right out of the gate, but I’m hoping that I can be provided with arguments that convince me of alternatives.
+
+LCA: JHD, what arguments other than primitives would you consider a valid reason to not block this proposal?
+
+JHD: I don’t know, I haven’t read them yet.
+
+LCA: So you say that you will not block this proposal if it includes primitives. You will not block if there are other solutions, but you don't know what those solutions are, and nobody knows what the solutions are. I feel like you're still saying that you will block this because of primitives.
+
+JHD: Effectively it means, yes, but the difference is that I’m trying to be very clear that I’m convincible. I’m not being intractable here.
+
+CDA: Yeah, I'm going to jump in because I think it's useful discussion, but I don't think the outcome of this discussion is going to convince JHD today to not withhold his consensus for stage 2 at this point—you can jump in if I'm wrong at any point. So, on that basis, Jesse, would you like to spend a moment to dictate a summary and conclusion for the notes?
+
+JMN: Yeah, the conclusion is that decimal stays at stage 1, i.e., not even conditional stage 2. We discussed some changes in the API and iterations on spec text, and it's on the champion to keep working with Waldemar and Shane and others to improve the spec text and Intl integration, and this may come back in the future. Thank you.
+
+CDA: Just note that it’s removed from the queue, but RGN also was not prepared to proceed given the contention on multiple fronts and then, Shane—I can read it off, but Shane did you want to speak to your—note there?
+
+CDA: Yep. SFC says I think it’s clear that the proposal just needs more explicit motivation. I’m convinced but lots of delegates remain unconvinced by motivation.
+
+DE: Can we hear more about RGN's comment? You said RGN wasn't convinced?
+
+RGN: Yeah, given the discussion in this room and the substantial contention associated with it, we’re not willing to allow this to advance today.
+
+DE: Can you be more specific? Which parts of the contention, and what do you mean by "we"?
+
+RGN: We being Agoric, and contention being issues around the shape of the spec, the nature of object versus primitive, and the tangent about operator overloading. There seems to be lots of material that needs further refinement and I’m expecting this to come back in a later meeting, but for today it didn’t pass the bar.
+
+DE: Can you—so, the champion presented a plan, a proposal, about what would happen with operator overloading. Is this something that Agoric has specific concerns with?
+
+RGN: No.
+
+DE: Okay, great.
+
+CDA: Okay, thank you, Jesse. Thanks everyone. The saga continues. Um, GB. Are you there?
+
+GB: Yes. Just give me one second.
+
+CDA: Sure.
+
+GB: So, I'd just like to request a clarification on the timebox for this. It was originally a 30-minute timebox, and now we're looking at 23 minutes until lunchtime. Would it be possible to go 7 minutes into lunch?
+
+RPR: In the room, we can take 7 minutes off lunch, yes.
+
+### Speaker's Summary of Key Points
+
+The spec text has been considerably fleshed out, though it is still not 100% complete. After discussion, we asked to advance to stage 2.
+
+However, critical feedback keeps decimal at stage 1. The champion will work with WH, SFC, JHD, and others who expressed critical feedback, which included: ensuring that rounding and selection of decimal quanta are properly defined; and finding evidence that decimal libraries are robustly used in the JS ecosystem, thereby justifying adding decimal to JS.
+
+### Conclusion
+
+- Another round of iteration on the spec text is needed.
+- Deeper integration with Intl is needed, in particular, with Intl.PluralRules.
+
+## ESM Phase Imports
+
+Presenter: Guy Bedford (GB)
+
+- [proposal](https://github.com/tc39/proposal-esm-phase-imports)
+- [slides](https://docs.google.com/presentation/d/17uYZ9-pm2Aa2yw1iP8OsOvRwZBWjgjn6Xid5jxEV7ZE/edit?usp=sharing)
+
+GB: I really appreciate it, everyone in the room. In that case, I will share my screen and begin the presentation. Okay, so, this is the proposal for ESM Phase Imports. It's following on from the source phase imports proposal, which provides the source phase syntax, and it provides an object for that phase for JavaScript modules. The motivating use case that we're using for this proposal is worker instantiation—the ability to create a new worker from a source. Right now with existing worker instantiation, you have to resolve the path relative to the current module and then create the worker, and there are certain issues with this. It's not ergonomic. It's not a static capability, so the ability for build tools to analyze this is limited. The string passed to `new Worker` is not a module specifier, it's a path, so you have to do normalization of the module specifier yourself. There's limited tooling support for these patterns—in terms of bundlers and builders being able to pick up on these relations and handle them in, for example, library patterns—which discourages libraries from using workers more widely in JavaScript. The idea is that, similarly to how source phase imports provided a solution to the ergonomics of Wasm instantiation, we can use the JavaScript module source object as a representation of that source to instantiate a new worker from the source. And if we have that, we get more ergonomic worker instantiation that is part of the static module system, all toolchains can align with that, and at the point where tooling handles these cases we get something that is also portable and can potentially be used across different kinds of library patterns. So, that's the motivating use case for this proposal. There is also a secondary objective around layering: while worker instantiation is the motivation, in reality this is implementing a new primitive for a JavaScript source phase—
+sort of higher-order modules in JavaScript that lead on to other things like module expressions, module declarations, and loaders that allow virtualization and bundling workflows. It's very difficult to create a proposal that's just layering, so that's why the chosen use case is workers, which gives a direct benefit that we can provide for the proposal itself; but in reality, we're building a primitive that can fulfill this layering solution. And just to dig into where this can lead and the other proposals that it leads on to: module expressions was originally motivated by worker use cases, to allow inline modules that you can dynamically import, and module declarations extended this to become a bundling primitive. These are object values that can be passed directly into dynamic import and also support transfer via structured clone. Here's an example where a worker is created and an inline module is posted through structured clone into the worker, and later on inside of the worker that module is imported.
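+
+A sketch of the ergonomic difference (the second half uses proposal-stage syntax; the `./worker.js` path and options are illustrative assumptions):
+
+```js
+// Today: manual URL resolution, opaque to static analysis and bundlers
+const worker = new Worker(new URL("./worker.js", import.meta.url), {
+  type: "module",
+});
+
+// With ESM phase imports: the source is part of the static module system
+import source workerSource from "./worker.js";
+const worker2 = new Worker(workerSource, { type: "module" });
+```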
+
+GB: And so, for this ESM source phase proposal, the object that can be imported would be the same object that we would have for a module expression. Starting off with this worker use case and then saying that we can do dynamic import and structured clone, in theory a module expression should be inline module text that gets those same features out of the same primitive. So this primitive should be designed from the beginning to support dynamic import and structured clone—I guess is the argument I'm trying to make here. So here's the dynamic import.
+
+GB: So you should be able to import a module in its source phase and then dynamically – [ Audio Breaking Up ] – and then we have the specs for it. It does not support importing a module source—I just got a message that my connection isn’t stable, if I do cut out or if there’s lack of clarity, please let me know.
+
+CDA: You’re okay for now.
+
+GB: Thanks. So, this is specified, but not for the case of a module source value coming from a separate realm—I'll go into more detail in a second—but first I want to discuss the design and the phasing model. The host resolves the module and you get the module key into the module map. Originally the key was just a URL, and nowadays it's the URL and the attributes, and I've added one other thing here as a placeholder for when we have module expressions: if you have an inline module, you probably want some extra unique ID associated with that module expression—especially because you want to pass that module expression around between different realms—and all of those things together form the key. Imagine this key: right now it's not specified anywhere, it's not really an explicit thing, but implicitly in the model you can think about it as existing here. The fetch/compile stage fetches the source text and parses the module, giving a sort of compiled module record that still points to its original key. It's got the module source text; as soon as it's a value, it has a realm associated with it, but you could have a more general thing that is not associated with a realm. Then you have the link/evaluate phase and the module instance, and the module instance is also associated with the module key. So you've got the source and the instance on the module key, but this instance is a canonical instance—the default instance under the default host linking. When we allow virtualization, a module key could have multiple instances, but for now the key maps to a single canonical, host-linked instance, which gets evaluated and has evaluation state.
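+
+The model GB describes might be sketched as follows (purely conceptual; the "module key" is not reified anywhere in the current spec text):
+
+```js
+// Conceptual sketch only, not a real API:
+// key       := (url, importAttributes, uniqueId?) // uniqueId reserved for module expressions
+// compile   :  key -> CompiledSource              // source text; realm-independent in principle
+// link/eval :  key -> CanonicalInstance           // host-linked, carries evaluation state
+```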
+
+GB: So, with that in mind, when we dynamically import a module source—as the layering primitive for module declarations—what we can do is think about this conceptually: when I statically import a module source and pass it to dynamic import, it asks, what is the key for that module source? Go to the key and see if there's a canonical instance for it; if there's not, link that key with the host linker; and since we're looking for the evaluation phase, drive its execution to completion. This is currently fully specified in the spec text of the proposal.
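+
+A minimal sketch of this flow (proposal-stage syntax; `./mod.js` is an assumed path):
+
+```js
+import source modSource from "./mod.js";
+
+// Dynamic import accepts the source object: it looks up the source's key,
+// reuses the canonical instance if one exists, otherwise links with the
+// host linker and drives evaluation to completion.
+const ns = await import(modSource);
+```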
+
+GB: And then we also get the ability to defer, if you want. So this would statically import the module source—but none of its dependencies—in the first line, so you just get that compiled source record, and then the defer acts on it: the module source acts as a capability for its key, so you can defer it and get the linked version of the instance without the execution. And this kind of degenerate case works out as a consequence: if you do a source import of the source, you'll get the same source back—obviously, only within the same realm. This is implemented in the spec text: right now if you import a source, create an iframe, and pass the source through the iframe to a dynamic import, and it sees that the source value you provided to the import doesn't come from the same realm, it will throw an error. We could make this a realm error; with compartment-level boundaries, it would be at the compartment level. We could even disable this case entirely if we wanted to, but I'm not sure that we need to. We'll get a bit more into the realm question shortly.
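+
+Sketched in proposal-stage syntax (this combines ESM phase imports with the separate import-defer proposal; whether `import.defer` accepts a source object is exactly what is being described here, so treat this as illustrative):
+
+```js
+// Fetches and compiles only this one source, none of its dependencies:
+import source modSource from "./mod.js";
+
+// The source acts as a capability for its key: defer links the instance
+// without evaluating it yet.
+const deferredNs = await import.defer(modSource);
+
+// Degenerate case: a source-phase import of a source yields the same
+// source back (same realm only).
+const sameSource = await import.source(modSource);
+```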
+
+GB: But I also wanted to mention Wasm module imports being supported, so that we can now talk as well about using the Wasm source object as a capability for its key in the module registry, so you can get the canonical instance. And then, what about a user-created module object? If we got the Wasm source through a source import, certainly we know the key, but if I just create an arbitrary `WebAssembly.Module` object, how do we know what the key is for it? The easiest solution: if it was just created inline, it has no key and we just throw, and that's the default behavior right now. But we could also fully support this, basically saying it has the same semantics as in JavaScript: when you create a new module through compile streaming, it associates some unique ID with it, in the same way we had the unique ID for the inline module key. These are cross-specification concerns and we can't define them in ECMA-262.
+
+GB: First, there are the cross-realm situations: if you pass a module source across an iframe boundary to a dynamic import, it will throw an error, but we could potentially relax this case, allowing modules to still work—basically treating keys as shared between realms, so that if you pass a module source, you can treat it as a key in any realm. This is something we'd like to explore in stage 2, as a requirement for stage 2.7, to determine if it's worth considering.
+
+GB: Furthermore, there might be some spec refactoring to do: at the moment in the specification there's no concept of the key and no concept of a source independent of its instance. Right now we've replicated the semantics fully, but without the cross-realm case, and there might be a semantics that allows us to form the compiled record without the instance, or even without a key. This is something we would like to explore as part of the stage 2 process—possibly as an editorial PR for ECMA-262—along with the cross-specification work. It would help to say that what we have right now is correct according to the current specifications, and then we can explore the larger spec refactoring and cross-specification work under stage 2.
+
+GB: There's one other set of use cases associated with this proposal. The two use cases here are, first, analysis tooling that analyzes and iterates module graphs—bundlers crawling module dependencies and recursively fetching them—and second, wrapper module construction: the ability to create a module which has the same exports as another module, but with instrumentation around the exports, commonly used for mocking and performance analysis. Both of these use cases can be solved with some very basic analysis functions on the module source.
+
+GB: This is not part of the motivating use case, but we have specified these functions as well as part of our proposal and so this is an imports function on the abstract module source, as well as a named exports and a star exports function. Import – [ Audio Breaking Up ] –
+
+USA: It seems like GB dropped out. Let's give it a few minutes. I can see the mouse moving. Ah, is that on our side?
+
+CDA: It looks like GB dropped.
+
+USA: Yea. He mentioned that might happen. I assume that—he’s going to rejoin. Maybe ping GB.
+
+CDA: Let’s give it a minute. We only have 5 minutes—well we were going to go over by 5-7 minutes to allow for the topic, but failing that—let’s give it a couple of minutes.
+
+USA: By any chance, does anybody have a better way to reach GB than Matrix? Better as in more reliable?
+
+NRO: I looked in the queue, I can’t speak for the motivations proposal, but if anybody has questions about the semantics, I would be very happy to answer them.
+
+MF: I was wondering how wasm compile time imports works with this proposal. Is there a plan for how those might be passed? Import attributes or something?
+
+GB: Hi, I'm sorry about that. I'm joining from Italy—we're currently on vacation and apparently the Wi-Fi is not very stable. Are we still on the topic, or was the decision made to defer until after lunch? Chris?
+
+CDA: So, on the topic—still on the topic as far as I’m aware.
+
+GB: I can pull up my slides again. Just a second.
+
+USA: I could see your screen, and then it’s gone. Now. Okay. Perfect.
+
+GB: Great. So, we've got these functions which can be used for dependency analysis. With named exports and star exports—where star exports returns the same type as imports—you can take the union of the local named exports and the star exports to get the total list of exports, and you can use that to construct a wrapper module. There was discussion about the naming of the star exports function. I've got a PR up to rename it to wildcard exports. I didn't want to land a PR in the last week before the meeting, but my plan is to land it after this presentation, so if there are any further thoughts on that, they would be very welcome. The term "star" is used internally in the spec text, but it's not used in any public APIs, and I haven't seen "star" used anywhere else to reference these exports. The term "barrel file" is used, and on MDN there is the term "wildcard", but I haven't actually seen a strict name for these `export * from` statements. starReExports—sorry, starReExports was the original name—kind of conflates with `export ... from`. But wildcard exports is the final proposal. If there's any feedback, that's welcome.
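+
+A hedged sketch of what these analysis functions might look like (the method names follow the discussion above; the exact returned shapes are assumptions, not final spec):
+
+```js
+import source modSource from "./mod.js";
+
+modSource.imports();      // dependency specifiers, for graph analysis
+modSource.namedExports(); // locally-known named exports
+modSource.starExports();  // `export * from` specifiers; the open PR
+                          // proposes renaming this to wildcardExports()
+```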
+
+GB: Then there are two other bits of module metadata, which provide extra information about the module: top-level await and dynamic import. The top-level await bit is important to check for modules—contexts that require modules to be synchronous would check this bit and throw if it is set. We did specify a dynamic import bit, but there's a PR to remove it—again, I didn't want to land it the week before the meeting. The reason is that you can use dynamic import inside an eval expression and it will work correctly and contextually; therefore dynamic import is not conclusive as part of the module's static analysis.
+
+GB: Okay. So, the summary is: we're looking to get stage 2 for the complete proposal, with the spec text covering the JavaScript module source and its support in dynamic import and source analysis. The cross-specification interaction with HTML has not yet been defined, and the cross-realm case currently throws. We would like to explore these further within the stage 2 process, including investigating a possible editorial spec refactoring for the proposal. Those would all be things that we would clarify and confirm for stage 2.7. The two PRs that are currently unmerged are removing the dynamic import bit and renaming star exports to wildcard exports.
+
+GB: That’s it. We have 6 minutes for discussion. So, if someone can run the queue because I can’t currently see it.
+
+USA: Yea, I can do it for you. First we have Richard.
+
+RGN: So, meta commentary: I really appreciate the specific identification of issues to resolve before stage 2.7, and I think that would be a good pattern to carry forward. I'm assuming that there is going to be explicit documentation of them in the explainer? Or, you know, somewhere recoverable outside of meeting notes?
+
+GB: I can do a follow-up to add some specific dig into these and read them further, as part of the stage 2.
+
+RGN: Great. Thanks.
+
+USA: Okay. Well, that was it for the queue, Guy.
+
+GB: I can speak very briefly to LCA's point about WebAssembly: there's the concept of compile-time imports, where the string built-ins specification defines imports which—maybe because WebAssembly doesn't have a concept of a global object, so it can't just access `global.whatever`—compile-time imports can satisfy in a sense. You can think of JavaScript modules as having access to these things; it's like a hard linkage irrespective of the actual graph, so it provides special names for these features, like the string built-ins. They're not part of the normal imports; they're kind of special contextual imports for the host specifically, and they're not built-in imports in the module sense—they're more like global access, like accessing the window object. Those would be supported for the source phase. JavaScript doesn't have any sense of that, so it doesn't apply to JavaScript.
+
+USA: Next in the queue is Dan.
+
+DE: So, I think this proposal is great in the way that it's framed, or layered: it will help with module declarations and module expressions because they can be the same kind of object. I was initially not expecting there to be these particular introspection methods, but I think they're needed for source module tooling and for wrappers. I have trouble understanding how they can be applied in pure ESM—I can understand how they're valuable for wrappers if you have a loader, but otherwise, given that you can't re-export the thing statically, I'm not sure how you're supposed to use that in a pure ESM context. But the other contexts are important enough to justify this.
+
+GB: The real-world use cases are in the bundlers themselves and tooling that is part of some kind of host-level loader system—bundlers, etc.—and anywhere es-module-lexer is used, which is a library that is fairly well used in the ecosystem; this would replace a lot of those uses. So the benefit is there for tooling and it's a really nice helper, but yes, it is very much a secondary thing that we can provide as a nice-to-have here. We could remove it as well, if anyone has any concerns.
+
+DE: Yea, these things seem quite simple and also quite useful for the use cases so I’m fine with them landing, but I also wouldn’t be opposed if somebody wanted to break them out into a separate proposal.
+
+USA: Okay, that went away. We're low on time, so please make it 10-15 seconds, Luca.
+
+LCA: I want to explicitly support stage 2 for this. I'm very excited for this proposal. I think making workers easier to use will be a great improvement overall; there are a lot of libraries that don't use workers because tooling doesn't support them well, so I'm very excited to see this, and I hope we can use it soon.
+
+USA: Next we have RGN.
+
+RGN: Reification of import phases is broadly useful; I like the overall picture for module harmony and the narrow scope for proposals that constitute it. This is great for the kind of tooling that already exists and will support new varieties as well. Enthusiastic support for stage 2.
+
+USA: Dan Minor expresses support for stage 2. Next is NRO.
+
+NRO: I support this proposal and I'm pleased that you were able to motivate this independent piece. This will help a lot with [ Indiscernible ] and declarations and expressions, because they become syntax for an existing thing.
+
+USA: Next we have support from Dan, and support from CM as well. So, it sounds like you have overwhelming explicit support, GB. Do you want to give final comments? Congratulations.
+
+GB: Thank you very much. I’ll let you get to lunch and I’ll do a follow-up PR to the readme on stage 2.7 process going forward.
+
+USA: Yea, and would you like to do a conclusion while we break for lunch?
+
+GB: That was my conclusion.
+
+USA: Okay.
+
+USA: NRO?
+
+NRO: Yeah, given that there have been some changes to the module proposals, is the committee interested in hearing another presentation of the overall picture that we have in the modules group? I see people nodding in the room, so we will try to schedule something for one of the next meetings.
+
+All right, thank you, Guy, and thank you everyone for putting up with this minor delay. Let’s break for lunch. See you at the top of the hour.
+
+### Conclusion
+
+- Proposal advanced to Stage 2.
+- Specified behaviors include dynamic import of a module source, the module source object, and its source analysis functions.
+- Importing a module source from another realm currently throws, relaxing this behavior is being explored further as part of Stage 2, including a possible upstream spec refactoring for compiled module records and/or module keying. Cross-specification work is also being explored as part of Stage 2.
+- GB will do a follow-up PR to the readme on the Stage 2.7 process and progress going forward.
+
+## Intl.DurationFormat Stage 3 update and normative PRs
+
+Presenter: Ben Allen (BAN)
+
+- [proposal](https://github.com/tc39/proposal-intl-duration-format)
+- [slides](https://notes.igalia.com/p/pj5uX_5nC#/)
+- [PR](https://github.com/tc39/proposal-intl-duration-format/pull/198/files)
+
+BAN: Okay, so, this is the update and one normative PR. As I stated while I was fumbling to put my slides up, this is very much not a 30-minute update; it's closer to 5 minutes. We have one normative PR, but it's very, very small. Our current status: we're tantalizingly close to asking for stage 4, but we have one small normative PR, and we're adding testing for recent normative PRs. We also have some editorial work in progress: basically, the most straightforward, readable way to write part of the spec is not the way it should be implemented—implementing it that way would be extremely inefficient—and although the spec is more concerned with readability than implementation, implementing it a different way from the spec makes it hard to keep the implementation and spec in sync, so we have a refactor to make this part less annoying for implementers.
+
+We do have one small normative PR to handle an edge case. It improves the formatting of very long duration components in the digital clock styles—"numeric" and "2-digit"—which are meant to represent durations as if they were on a digital clock. In `DurationFormat`, people can use large values for minutes and seconds, and these were formatted with grouping separators. For example, if we were formatting a duration that included a number of seconds in the millions, that would be represented in the digital clock form with grouping separators, which badly breaks the "digital clock" metaphor. The change in the PR, which has been approved by TG2, is that if we're formatting with one of the digital clock styles, we turn off grouping, so that a number of seconds like 1 million would be represented as 1000000 instead of something like 1,000,000. I'll just go to the PR—there we go, that is visible. It's fairly small and has approval from TG2. That's the one normative change that we have. What is the process here? Do I formally ask for consensus for this change?
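+
+A sketch of the behavior change (assumes an engine that ships `Intl.DurationFormat`; exact output separators vary by locale and implementation):
+
+```js
+const df = new Intl.DurationFormat("en", { style: "digital" });
+
+// Before the PR: large components could pick up grouping separators,
+// e.g. "…1,000,000" — breaking the digital-clock metaphor.
+// After the PR: grouping is disabled for the digital clock styles,
+// e.g. "…1000000".
+df.format({ hours: 1, seconds: 1000000 });
+```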
+
+USA: Yes.
+
+BAN: I would like to formally ask for a consensus for this normative change.
+
+USA: We have DLM on the queue in support of the normative change. And also SFC, with support. If either of you would like to speak to that, feel free.
+
+SFC: Thanks as usual to Anba for finding these issues and reporting them.
+
+### Conclusion
+
+USA: All right, you have consensus. Fantastic.
+
+## Continuation from previous meeting: Explicit Resource Management Normative Updates and Needs Consensus PRs
+
+Presenter: Ron Buckton (RBN)
+
+- [proposal](https://github.com/tc39/proposal-explicit-resource-management)
+- [slides](https://1drv.ms/p/s!AjgWTO11Fk-TkqpkI6V9_w6ykvsG1w?e=ehAC64)
+
+RBN: This is a brief continuation from the April plenary. We were discussing deterministic collapse of Await, specifically PR #219 on the Explicit Resource Management repository. There was a question from NRO, who wanted to review the PR before making a determination as to whether he was comfortable taking the change. After some discussion and some minor tweaks to the algorithm, he approved, but I wanted to make sure that he had a chance to chime in if he has questions, and to revisit whether we can get consensus on this specific change. I can go into more details on it as well.
+
+NRO: Yea, I don’t remember if I had a request, but I took a look again and it looked good to me, so thank you for this.
+
+RBN: All right, so I’ll briefly discuss what this was again. Basically, when an `await using` declaration contains a declaration whose value is initialized to `null` or `undefined`, it is a feature of both the `using` and `await using` declarations that `null` and `undefined` values are ignored, rather than trying to get a `[Symbol.dispose]` or `[Symbol.asyncDispose]` method off of them and then throwing if it doesn’t exist; this better supports conditional resource allocation. In the specific case of `await using` declarations, when you take an `await using` that is initialized to `null`, we still want at least one Await to occur, to meet the specific requirements that had been set forth by MM. As you can see in the example here, in the current specification text this would result in an Await occurring three times, even in cases where it’s really not necessary. For the X and the Y values, there is no real reason to Await. The important bit was that an Await occurred before the block exits, so that the code that executes after the block exits runs in a separate turn. These extra Awaits are unnecessary, so with this change we collapse all `null`- and `undefined`-initialized values to a single Await, and if there are any non-null/non-undefined values that are also being monitored by an `await using`, then no extra Await is added. This collapse occurs regardless of whether X, Y, and Z are initialized in individual statements or a single combined statement. Either way, they’re treated the same, and all this really does is reduce the number of unnecessary empty Awaits added to the task queue. My question is, do we have consensus on this specific change?
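The counting rule described above can be modeled with a small helper (a sketch of the collapsed behavior, not the spec algorithm; `countAwaits` is a hypothetical name, and it assumes one Await per real resource for simplicity):

```javascript
// How many Awaits occur at block exit under the collapsed semantics:
// each real (non-null/undefined) resource contributes its own Await, and
// null/undefined resources together contribute at most one extra Await,
// only when there is no real resource whose Await they can piggyback on.
function countAwaits(resources) {
  const real = resources.filter((r) => r !== null && r !== undefined).length;
  if (real > 0) return real; // no extra Await for the null/undefined ones
  return resources.length > 0 ? 1 : 0; // single collapsed Await
}

console.log(countAwaits([null, undefined, null])); // 1 (was 3 before the change)
console.log(countAwaits([{}, null, undefined])); // 1 (just the real resource)
```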
+
+USA: Let’s wait a little bit for the queue. MF, do you want to speak to that? Okay, MF expresses support. Also RGN and CM. And LCA. So, yeah, all supportive. pax also would support.
+
+RBN: Is there anyone opposed to this?
+
+USA: Not in the queue. Not so far.
+
+RBN: All right. Then I’ll take that as consensus?
+
+USA: Yea.
+
+RBN: And that’s it for this specific topic.
+
+USA: Okay, thank you, Ron. I don’t know if it’s the post-lunch laziness or if we’re making really good progress, but either way, we’re blazing through, so let’s keep going.
+
+### Speaker's Summary of Key Points
+
+- Existing requirement is that an ‘await using’ declaration must Await at least once during disposal when execution exits the block.
+- Both ‘using’ and ‘await using’ allow null/undefined values.
+- Every ‘await using’ for a null/undefined value introduces an independent Await
+- Proposes collapsing extraneous Awaits for null/undefined resources to a single Await, or to avoid the Await entirely if there is also a non-null/undefined async resource.
+- PR #219 was awaiting review from NRO, who has since approved.
+
+### Conclusion
+
+- Consensus on PR #219
+
+## Discard Bindings update or stage 2
+
+Presenter: Ron Buckton (RBN)
+
+- [proposal](https://tc39.es/proposal-discard-binding/)
+- [slides](https://1drv.ms/p/s!AjgWTO11Fk-TkrFz0j1_3aLYU4vABg?e=YMk3IB)
+
+RBN: In case there’s anyone who hasn’t met me, my name is Ron Buckton, with Microsoft, specifically with the TypeScript team. I’m briefly going to discuss the discard bindings proposal, which was last discussed in the April plenary. This was a very late addition to the agenda, though I had mentioned beforehand that I wanted to bring this back for discussion, as we had a single blocking concern when we requested Stage 2 advancement at the last meeting. I was waiting to add this to the agenda until the review was completed for that specific change, so I wanted to go back through this now that that specific concern has been addressed.
+
+RBN: If anyone is not familiar, the idea with discard bindings is that they are a way to have an unnamed placeholder for a variable binding that allows you to elide variable names or binding names in certain contexts, such as `using` declarations. This was originally a feature of that proposal, but was pulled out, as it has a larger cross-cutting set of concerns including pattern matching, which also needs discards. There are some other very useful places it could be used, such as using the `void` keyword in place of a binding identifier in function and method parameters. C++ can have unnamed parameters, and both C# and Java use underscore to act as a discard in pattern matching and in other binding positions as well. We have so far decided not to use underscore for historical reasons: we have generally avoided giving an identifier a different meaning in expression contexts. The motivation for this proposal is the need for a declaration that produces side effects without introducing variable bindings, to avoid the need for “disable-line” comments for ESLint. Existing solutions aren’t really consistent. We have single-purpose things like Elision or binding-less `catch`, but there’s no general-purpose solution. Empty object patterns are not viable, especially in `using` declarations, because the value can be initialized to null/undefined and destructuring would throw. Simple elision is not sufficient because `using =` is already valid JavaScript, so we have to have something that indicates the binding. So, this proposal is to use the `void` keyword in the place where you would otherwise expect an identifier: in a binding pattern, in a binding element, or in a parameter. In a parameter position you could specify `void` to skip over a parameter you don’t intend to use, to avoid having to give something a name, and to avoid needing an underscore prefix or a comment to suppress linter warnings. It’s also extremely useful in pattern matching, where you want to match that an object has the properties X and Y without necessarily needing to examine what the values are.
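To make that concrete, here is roughly what the proposed forms look like (proposal-stage syntax that does not run in today's engines; `acquireLock` is a hypothetical function):

```js
// A `using` declaration held only for its disposal side effect:
using void = acquireLock();

// Skipping a parameter you don't intend to use:
function handler(void, event) {
  console.log(event);
}

// Pattern matching (with the pattern-matching proposal): check that the
// properties exist without binding their values:
// when ({ x: void, y: void }) ...
```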
+
+RBN: When this was brought up for Stage 2 advancement at the previous plenary, there was a conflict with the cover grammar for the `await using` declaration. This was brought to our attention by WH and was, as far as I know, the sole blocking concern for advancement at the time. The issue was that the cover for the `void` expression in UnaryExpression conflicted with the cover for `await using` declarations, because they both required a cover in the unary expression case. The resulting covers overlapped in a way that would make parsing ambiguous. The solution that I proposed, and that was approved by Waldemar on Tuesday, was to take the `void` cover out of UnaryExpression and move it to the more specific cases where it’s used, which are ElementList for ArrayLiteral and PropertyDefinition for ObjectLiteral. The change proposed in PR #9 is aligned with things like the cover grammar that we use in CoverInitializedName in object literals. In addition, we already have this mechanism in object literals, where we use the object literal syntax as a cover grammar for object assignment patterns and have notes explaining that certain parts of the cover grammar are not legal when it’s used as an expression. This extends that mechanism to array literals so we can use them as a cover grammar, and by doing so we avoid the ambiguity in how `await using` is parsed.
+
+RBN: As we discussed in the last plenary, Java has now also adopted the underscore character as a discard, joining most other languages that have this feature. I had asked at the time whether we should consider underscore instead. The way I proposed looking into this was to extend cases that we already support, or that are currently errors. You can already repeat underscore in `var` declarations, and you can also repeat underscore in the parameter list of a function in non-strict mode.
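Both of those existing allowances can be checked in today's JavaScript (a quick illustration; `new Function` is used to get a non-strict function, since duplicate parameter names are a SyntaxError in strict code):

```javascript
// Repeating `_` across `var` declarations is already legal;
// the binding is simply overwritten.
var _ = 1;
var _ = 2;
console.log(_); // 2

// Duplicate parameter names are legal in non-strict functions; the last
// parameter wins. `new Function` always produces a non-strict function.
const f = new Function("_", "_", "return _;");
console.log(f(1, 2)); // 2
```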
+
+RBN: Repeating underscore in lexical declarations today is an error. What we could do is relax that error, but still error if you try to reference the duplicate in an expression. As I recall, there was some pushback on this at the last meeting. It’s still an area I would like to explore as an alternative, but `void` is something I know will work; we have already done some validation of that syntax. What I’m looking to do today, if possible, is to advance this proposal to Stage 2 and then further consider the implications of using underscore, potentially doing a wholesale switch to underscore if we find that’s viable long-term.
+
+RBN: I’m seeking advancement to Stage 2. This was a late addition to the agenda, so, anyone is welcome to block purely based on the fact that there was potentially insufficient time to review. My hope is that’s not going to be a major concern because this was brought up for Stage 2 at the previous meeting and again there was at the time only one blocking concern, which was around the grammar so at this point I would like to go to the queue and potentially ask for advancement.
+
+USA: Yea, so before we go to the queue: there are 6 items in the queue at the moment and 5 minutes left, although we are running ahead, so you can ask for an extension later, maybe, but let’s hope it doesn’t come to that. First we have DLM.
+
+DLM: I have a question. Let me take a step back: in general, I think we should be trying to solve problems in the general case, but in this case, especially for consistency with pattern matching in other languages, having underscore would be very, very nice. So I would like to ask: if we gave up on trying to solve this in the general case, would we be able to use underscore specifically in pattern matching?
+
+RBN: My intuition for that is “no”, and the reason that it’s “no” is the reason that we have had push back in the past against using an identifier for something other than a regular identifier in a place where an identifier would otherwise be an identifier reference. If we could use it in pattern matching, we could use it anywhere. The fact that we can’t use it in most places also applies to why we wouldn’t be able to use it in pattern matching.
+
+USA: We have a reply by LCA.
+
+LCA: I also just want to push back on the idea, DLM, that this is only useful in pattern matching. I think there are useful cases outside of pattern matching: `using` and `await using` declare resources that exist purely for their effects, such as mutex guards, and in that case having a discard binding is also very interesting. So, yeah.
+
+DLM: Yea, I wasn’t saying that pattern matching was the only relevant use case; I just think it’s the one where it would be really nice to have for consistency. And I would point out that explicit resource management could have scoped this in there as well, although I’m happy with Ron’s answer to my question.
+
+RBN: If I can add to that: for the same reason, we thought about using underscore for pipeline, and there was pushback on using it in those cases even though it was syntactically scoped to pipeline and only appeared in a unique syntax. The same pushback occurred there as well, so the fact that it would be unique to pattern matching doesn’t address the specific concern.
+
+USA: Okay, next we have NRO.
+
+NRO: Yea, I’m happy to see that using underscore is still under discussion for Stage 2. While I think the proposal has value even if we just go with `void`, I think it would be a mistake to go with `void` instead of underscore. So thanks for keeping it in scope.
+
+RBN: Yea, I’d still like to pursue the potential of using underscore, but regardless of whether we use `void` or underscore, the rest of the proposal is going to be unchanged, so it’s something we can still workshop a little, talk with different folks, and see if we can address concerns, and maybe make a syntax switch during Stage 2. But I think this is the right direction for now, because `void` works as a fallback, so it’s a good way to keep the proposal moving, stay relevant, and keep making progress.
+
+USA: Let’s extend the timebox by 5 minutes and then next in the queue we have RGN.
+
+RGN: I still like this proposal. I think it fills a very real need and I’m happy to see progress being made. I also still like void specifically. I think that it’s playing with the hand that we’ve been dealt in terms of what already exists within the language. But even if it ends up being underscore, I would still support this.
+
+USA: Next is LCA.
+
+LCA: Yea, I want to echo—well, not all of that, but I want to echo the +1 for Stage 2. I would very much prefer underscore. I think there’s a lot of weight in the precedent from other languages, especially considering that even a language like Java, which has been around for a very long time, uses underscore, and also the fact that many developers already use underscore as a prefix for identifiers to mean “this is unused”, so there’s precedent in the community for this. So I would very much like to see further investigation of underscore, and I’m happy you’re continuing with that.
+
+USA: We have a reply by Duncan.
+
+DMM: So, in Java, underscore could be adopted, but only because there was a well-defined route to do so. It had been a legal identifier; it was made illegal as an identifier in Java for source versions greater than a certain value, and then it could be reintroduced as the discard binding later. Since JavaScript doesn’t have that sort of source versioning—a way to keep binary compatibility while flagging source problems on recompilation—I don’t think you can use Java as a model for how that adoption can happen in JavaScript—in ECMAScript, sorry.
+
+RBN: I concur. In this example I’m not using Java as a model for how to do it, but more as one more example of underscore being used for this purpose in other languages. Underscore is especially tricky in JavaScript because there’s literally a package that is heavily used in the ecosystem called underscore, which uses the underscore character as the import or global script reference, so it’s really hard to do anything with underscore because the JavaScript community gave it meaning. Although it’s funny that in underscore and lodash, and also in FP libraries like Ramda and others, the underscore character can actually be used in some of these ways to ignore things, or as a placeholder in other cases. So it has a bit of a messy history, but it’s unfortunately very heavily used.
+
+RBN: That’s why, if we could make it work, the only way I could see it really working is if it’s only legal in cases that are illegal today: we would relax the duplicate-declaration error a little and allow underscore to be redeclared, but still not allow the duplicate to be referenced, because that code is illegal today anyway. So that’s a possible way to make it work in the future. If we want to discuss this more in Stage 2, we don’t have to spend a lot of time on it today, but there’s more that we can investigate here as well.
+
+USA: So, next we have a reply by LCA.
+
+LCA: I don’t want to take up more time on this, but I wanted to reply to say that I didn’t mean that Java is exactly the path we can follow. Java had some complexities in doing this—they had to deprecate the syntax first and reintroduce it later—and they still went with underscore, and I think that shows there’s value in using underscore. The precedent in other languages means that we should probably consider doing this, even if it’s more complicated than using `void`.
+
+RBN: Essentially, it’s worth the cost to find a way to make it work—at least it has been for the languages that have paid the cost to make it work.
+
+LCA: That’s right.
+
+USA: Ron, just FYI, we can do maybe one more extension for 5 minutes, but—
+
+RBN: Okay. I’d like it if we could extend this for five more minutes, because I would like to see if I can get to the first, if not the second, topic.
+
+USA: Okay, WH.
+
+WH: Thank you for fixing the grammar. Before the fix lexing was impossible after `void`. I support this for Stage 2, but I also have a preference for `_`, and I would not want both `void` and `_` to be discards. I think we should pick one or the other.
+
+RBN: That’s mostly my perspective as well. I saw MF had a comment that `void` is strictly more reliable, because there are some weird cases, such as the example I have on the current slide, where `var _` is assigned one thing and `_` is assigned something else, and it’s not exactly doing what you expect. The other issue is that if we did underscore, we wouldn’t be able to use it in assignment patterns—there’s a caveat I don’t have listed here: you would be overwriting an existing binding. So there are some limitations if we want to use underscore, and there’s a lot more we have to dig into there, which is why I’m postponing that question to Stage 2 as we dig further into it.
+
+USA: Then we have SFC.
+
+SFC: Yea. When I look at the examples on the slides and in the repository, `void` means something very specific in other programming languages like C++, where `void` means the none type, the absence-of-value type—it means the unit type. It actually is a type, and when I read `void` here, I read it as “X has type void; it’s a unit variable”. This is a problem that a lot of other programming languages have solved, and I think the educational benefit of using the syntax that every other language has figured out how to use in some way is extremely important. `void` carries baggage that I think would be harmful to the usability and readability of the code. MF had a topic that he deleted which seems like it could be interesting: if underscore cannot be used in every context, maybe there’s an opportunity to choose the other keyword in the contexts where it really can’t be used—you could use `void` or underscore in certain contexts if you need to. But overall, I’m not convinced.
+
+RBN: Well, I’ll say a couple of things to that. I’m aware of that baggage—`void` has a specific meaning in other languages. My takeaway is that there are basically two ways `void` is used in JavaScript today. You have the `void` expression, which generally means “execute the expression and then discard the result”. So there’s a rough correlation between discarding the value and discarding the binding that I’m trying to carry through this proposal. The other is the `void` type in TypeScript, really the only other place it’s used—and that’s not exactly JavaScript, but close enough for this case—and there it’s not a unit type, it’s more of a discard type. In many cases it means `undefined`, at least when you’re defining a function that can’t return a value, but it also has meaning for what can be passed as a callback: you can pass something that returns `number` to a callback parameter that is only expected to return `void`, because the expectation is that you won’t do anything with the result. It’s not the same as the unit type. There is a break from other languages, but it might be a cost that we have to pay. Underscore has some complexity around getting to use it, and it has some limitations that might not make it viable. Beyond that, we don’t have another keyword that makes sense to use here, so we can’t use an identifier, and most symbols and tokens that we might use have meanings that don’t match or wouldn’t work with `using`, because they would turn `using` into a compound assignment or a function call or something else complicated. So we have a limited set of options: underscore with its limitations, or `void`. It’s a compromise, doing the best with what we have, in my opinion.
+
+USA: In the queue first we have DLM with support for what SFC said, and then we have MF.
+
+MF: I want to respect precedent from other languages because many programmers have context in other languages, but I think when we already have precedent in JavaScript, we should respect that more, and we do have this precedent in JavaScript. RBN mentioned how TypeScript uses void, but we have the void operator in JavaScript, which literally means discard this thing. We take an expression and then we don’t use it anymore. And JavaScript programmers with familiarity with that will see the similarity between these features. They seem to work nicely with each other. I don’t think that the precedent from C++ is more important than that.
+
+RBN: To paraphrase: essentially, we’ve already broken precedent with those languages in how we use `void`. I hope that makes sense.
+
+USA: SFC is on the queue next.
+
+SFC: Yea, on that particular topic: it would be interesting to know how widely used the current `void` operator is. I don’t recall reading code at any point recently that uses that keyword. That’s why `void` might work in this spot, but I would venture to guess that the majority of JavaScript developers are probably not familiar with that syntax, even though it exists. I also feel that the choice of syntax—underscore versus `void`—is fundamental to this proposal, and there are a lot of big open questions here. This proposal was first brought up at the last plenary and is going for advancement to Stage 2 here; it seems very rushed.
+
+RBN: It was not first brought up at the last plenary. It has been at Stage 1 across more than one plenary as part of the resource management proposal, and it had been around for two years before it became its own independent proposal. I apologize for the interruption; I just wanted to clarify.
+
+SFC: It just feels rushed. I certainly had not seen these slides before this morning, so it feels rushed. I would feel much more warm and fuzzy if we went to Stage 2 having already established whether it’s `void` or underscore, with the pros and cons and the reasons for the choice. Having that question answered feels fundamental to the proposal—ideally something we would be agreeing on right now—but basically you’re asking for Stage 2 with this question unresolved: is it going to be `void` or underscore? I guess the problem is motivated, but it seems like a weak Stage 2. I won’t block Stage 2, but it seems like a weak Stage 2.
+
+RBN: I wanted to point out, that the slides today are the same from the April plenary, the only addition is the slide about the cover grammar. We had been discussing this prior to today and the main reason why I rushed this in today was an attempt to show that the one blocking concern that we had in the April plenary was the cover grammar so I was hoping that by showing that we resolved the cover grammar that we might be able to advance. We had already, in the April plenary discussed void versus underscore, that there’s some more discussion that we need to do there, but it didn’t seem that was a blocking concern for Stage 2, which is why I presented it today as something we can keep discussing in Stage 2 as an alternative or a change we might want to make, but the general outline of the feature and what we want to do is fairly consistent, regardless of which one we choose.
+
+SFC: Yea, I regret not being more active in the earlier conversations here. As to whether you should go to Stage 2, I’m not in a position to say, because I haven’t been an active participant in these conversations to this point. It still seems premature, but I’m not going to block Stage 2 because of that.
+
+RBN: As I pointed out and wanted to clarify at the beginning: if you didn’t have enough time to review because this was added late, you’re welcome to object purely on that basis and it will wait for the next meeting. This was more in the interest of expediency, to keep the ball rolling, but if you have concerns and want to block on that basis, that’s perfectly fine.
+
+USA: All right, we have gone over time, but we’ve gone through the queue. Um, would you—
+
+RBN: I’ll ask at this point whether we have consensus for advancement to Stage 2. Is there anyone in support?
+
+USA: There’s support from RGN in the queue. Just to clarify, SFC, would you withhold consensus?
+
+SFC: I said I’m not going to withhold consensus. I just made a comment that it feels premature.
+
+USA: Okay, thank you. Also in the queue we see a lot of support (RGN, NRO, DE, CM, LCA, MF). A lot of expressed support.
+
+RBN: Thank you, and just in case there’s anyone aside from SFC that would like to express concern, is there anyone opposed to advancement?
+
+USA: Okay. Congratulations on stage 2 and let’s move on. Would you like to, um, record a conclusion and summary of key points in the notes?
+
+### Speaker's Summary of Key Points
+
+- Proposed for Stage 2 Advancement in April. WH raised a blocking concern related to the cover grammar (issue #5 in proposal repo).
+- Cover grammar issue resolved in PR #9, approved by WH.
+- Underscore is still under consideration as an alternative to ‘void’, but has complications.
+- Postponing resolution of underscore vs. void until Stage 2.
+- Seeking advancement to Stage 2.
+
+### Conclusion
+
+- Consensus on advancement to Stage 2.
+- Many delegates expressed a preference for underscore.
+- Some delegates maintain strong preference for ‘void’.
+- Concern raised that utilizing ‘void’ for discards breaks with other languages' use of ‘void’ as a unit type.
+- JS’s existing ‘void’ operator already breaks that parallel.
+
+## Algorithms for Signals
+
+Presenter: Daniel Ehrenberg (DE)
+
+- [proposal](https://github.com/tc39/proposal-signals)
+- [slides](https://docs.google.com/presentation/d/1-_4KHsG6a3ZLuWlV2zz3dwGk2O9R7keEqkerb97NDYQ/)
+
+DE: I wanted to talk about signals a little bit more, in particular going over the algorithms and the APIs that signals are based on, focusing on some of the core ones. We have an updated, simplified signals logo, thanks to Anne-Greeth from the Ember community.
+
+DE: The goal: We want to understand the most important parts of the signal API and the motivation for the design. We will focus today on just the more core parts of the API; we may have to change other parts, partly due to concerns raised last time.
+
+DE: The outline: Signals are at Stage 1. We have a polyfill with tests, people have been developing against these primitives, there’s a Matrix channel, and there’s a lot of interesting experience already with integrating signals into various frameworks. People are excited, but again, this is going to be a slow project. I don’t expect it to be proposed for Stage 2 within the next 12 months, because there’s just a lot of work to do to prove this out.
+
+DE: This is the API. We have the Signal.State class, with a constructor taking the initial value. You can get it or set it. These capabilities are combined here, and it would be nice to separate them, but we’re focusing initially on something that various wrappers could use, and those wrappers would naturally implement that separation.
+
+DE: Computed signals: you construct one and pass a callback, which is the thing used to calculate the value of the signal. It just has a `get`; a `set` doesn’t make sense, because the value is produced by the callback.
+
+DE: There’s one extra API, called untrack, which runs a callback while disabling auto-tracking. This is a subtle API to use correctly, but experience with reactive frameworks shows that it’s necessary when you have broader reasons to know that the tracking of other things is enough.
+
+DE: And finally we have the Watcher. Now, Watcher is a thing which can receive callbacks based on signals becoming, you might call it, “pending”: we don’t necessarily know that the signal changed, but one of the things it depends on has changed, so it creates a synchronous notification to the Watcher so that later updates can be scheduled.
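Putting those pieces together, basic usage of the proposed API looks roughly like this (a sketch against the proposal's polyfill, where the watcher currently lives under `Signal.subtle.Watcher`; the API may change, and the scheduling here is simplified):

```js
const counter = new Signal.State(0);
const parity = new Signal.Computed(() =>
  counter.get() % 2 === 0 ? "even" : "odd"
);

const watcher = new Signal.subtle.Watcher(() => {
  // Called synchronously when a watched signal may have changed ("pending");
  // schedule the actual re-read for later.
  queueMicrotask(() => {
    console.log(parity.get());
    watcher.watch(); // re-arm to be notified of the next change
  });
});
watcher.watch(parity);

counter.set(1); // eventually logs "odd"
```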
+
+DE: Signals are based on fine-grained, pull-based reactivity with auto-tracking. I want to go into what that means and why it’s an important design decision. Maybe it seems like signals are just a trend and we’re making things up, but I think this is fundamentally a problem that has existed for a long time and continues to exist; everyone is converging on the same answer, and that answer makes sense for inherent logical reasons that I want to explain.
+
+DE: The goal is, we want to be able to construct UIs such that the view is a function of the model, such that the UI–the DOM–is based on the state and it’s as if we’re recalculating the DOM based on that. So, in practice, frameworks have their templating systems and they have holes in the templates, and the holes are based on the model. The example that we were discussing last time, where you have a counter model and the model is not shown here, but it has a way to get the count and parity and also a way to increment it. In this template we have a couple of holes that are filled in with different syntaxes and ways to interact with the model, but either way, it’s pretty declarative, and people have all kind of converged on this.
+
+DE: So, how does a model change lead to the correct view change? Two strategies: Immediate mode is where on every frame you render the whole screen and blit it out to the monitor. And that does work for some domains, but it doesn’t end up working for web-based UIs in practice. And the alternative is fine grain incremental updates. You figure out which parts need to be updated and you just update the relevant things.
+
+DE: All frameworks these days have their way of figuring out how to do fine grain incremental updates, but when people try to not use a framework they end up kind of tearing more things down and rebuilding more things, immediate-mode-style.
+
+DE: Auto-tracking, which I’ll get into in a little bit, is how we figure out what dependencies exist—to figure out, in this template, what references ‘count’ and ‘parity’, so you can figure out where to go in the template and update. You could have a compiler compute a conservative set of what might be referenced, if the references are all statically known. Or you could just watch which things are read at runtime. The runtime approach is the idea behind auto-tracking, and there are a number of dynamic scenarios where it’s useful for this to happen at runtime. An effect is a particular function that does auto-tracked reads and, when it becomes pending, is scheduled to be reevaluated.
+
+DE: To implement fine-grained incremental updates, the signal-based frameworks tend to break the template down into several different little effects that can be run separately when a particular part of the model changes. When the model changes, we could consider reevaluating eagerly: when you set the state, you would go and find the computed signals that depend on it and recalculate those, and if the result is displayed on the screen via an effect, you would rerun that immediately. Alternatively, when we set the state signal, its descendants come to be understood as dirty or pending, but we don’t evaluate them; if somebody reads them later, they can tell the cached value is outdated and compute a newer version.
+
+DE: Another way of talking about these two algorithms is, there are two options:
+
+- eager is push-based, a.k.a. observables: it’s driven by `set`; that’s what triggers evaluation.
+- pull-based is lazy: you’re driven by getting the value of the signal. Setting just records a new value, and the computed actually runs when you `get` it.
+
+```js
+const seconds = new Signal.State(0);
+const t = new Signal.Computed(() => seconds.get() + 1)
+const g = new Signal.Computed(() => t.get() > seconds.get())
+effect(() => g.get());
+// log true
+seconds.set(1);
+// with glitch may log false
+```
+
+DE: To be more concrete about what the problem is with push-based reactivity, here’s a diagram from Wikipedia; they have a nice article about reactive programming, and it describes the diamond dependency problem. You could get into states where the graph is simply behaving in an incoherent way. Imagine you have a ‘seconds’ variable, another computed t that’s ‘seconds plus one’, and then you want to see whether seconds plus one is greater than seconds. That should always be true, but if you update seconds and you’re pushing the update, and you go about it in this naive depth-first, left-to-right order, you’re going to evaluate the comparison before you evaluate seconds plus one, so it’s going to see them both as the value one, and may return false. That’s just incoherent.
+
+DE: So, the solution is, instead, when the get comes to the comparison, you get the whole set of dependencies and topologically sort them: seconds comes first, and t (seconds plus one) comes second. Fundamentally, we ensure that t is evaluated before the comparison, so that the comparison is performed on the correct values. Because t (seconds plus one) is evaluated before g (the comparison), because you’ve taken this global view of the dependencies, it’s going to come to a coherent answer.
+
+DE: When auto-tracking, the principle is that each time a computed signal is run, it collects its set of dependencies. So there’s a global variable tracking what the current computed is, and when you call get, whether on a state or a computed, it will add that signal to the current computed’s dependency set. So, logically, each time the computed signal is rerun it can get a different set of dependencies. Here’s a simple example: depending on what a is, either b or c will be the dependency.
+
+```js
+Signal.Computed(() => a.get() ? b.get() : c.get())
+```
+
+DE: If the dependencies are A and B, and C changes, it won’t invalidate it. You can also have dynamically allocated data structures of a bunch of signals, so static analysis is entirely impossible. I want to mention that the dependency set in practice is often stable so this ends up being not as expensive as you might expect.
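The dynamic-dependency behavior described above can be sketched with a deliberately tiny, hypothetical implementation. `MiniState` and `MiniComputed` are illustrative names, not the proposal’s API (which is `Signal.State` / `Signal.Computed`), and this sketch only handles state-to-computed edges, with none of the real design’s computed-to-computed propagation, watchers, or cycle detection:

```javascript
// Hypothetical minimal sketch of auto-tracking; not the proposal's internals.
let currentComputed = null; // the computed currently being evaluated, if any

class MiniState {
  #value;
  #dependents = new Set();
  constructor(value) { this.#value = value; }
  get() {
    // Record this read in the currently-evaluating computed's dependency set.
    if (currentComputed !== null) {
      currentComputed.deps.add(this);
      this.#dependents.add(currentComputed);
    }
    return this.#value;
  }
  set(value) {
    this.#value = value;
    // Mark dependents stale; they recompute lazily on their next get().
    for (const dep of this.#dependents) dep.stale = true;
    this.#dependents.clear(); // dependents re-register when they recompute
  }
}

class MiniComputed {
  deps = new Set();
  stale = true;
  #fn; #value;
  constructor(fn) { this.#fn = fn; }
  get() {
    if (this.stale) {
      const prev = currentComputed;
      currentComputed = this;
      this.deps.clear(); // re-track dependencies from scratch each evaluation
      this.#value = this.#fn();
      currentComputed = prev;
      this.stale = false;
    }
    return this.#value;
  }
}

const a = new MiniState(true);
const b = new MiniState("B");
const c = new MiniState("C");
const pick = new MiniComputed(() => (a.get() ? b.get() : c.get()));

pick.get();    // evaluates; tracked dependencies are {a, b}
c.set("C2");   // c is not currently a dependency, so pick stays cached
b.set("B2");   // b is a dependency, so pick is invalidated and will recompute
```

After `a.set(false)`, the next `pick.get()` re-tracks and the dependency set becomes `{a, c}`, which is the point of the example in the text.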
+
+DE: Overall we’re seeing a lot of convergence between reactive UI frameworks. I have a picture of a crab here, a very beautiful image from Wikipedia of a crab carrying her eggs. Several different crustaceans have independently evolved a crab-like body plan because it’s very effective for certain lifestyles. Frameworks are like this: they independently evolved to be similar.
+
+DE: Though Solid was among the first to call them signals, many other frameworks were also looking at what sorts of glitches or what sorts of errors they have, and all coming towards the same design principles. We’re all looking at this problem space, and arriving at different parts of the sort of correct algorithm at different times. This includes
+
+- being pull-based: when you write to a state signal, it invalidates the downstream things without reevaluating them
+- auto-tracking, because it’s the most accurate way to get the dependencies, leading to the most minimal set of updates
+- caching: when you’re trying to get a value, you look at all of your dependencies and see if they’re invalid; if they’re all valid, you can skip reevaluating yourself, and most frameworks cache their value. Caching can be “on the way in”, with each version, and also “on the way out”, where each computed signal evaluates to a value, and if it evaluates to the same value it did last time, then it doesn’t have to invalidate the things that depend on it; it can go back and revalidate those values. This is why the equals option is on the computed signal, just comparing an old vs new value, because it’s comparing on the way out.
+
+DE: So, I wanted to stop here. I know we have a shorter timebox and I wanted to do a Q & A on the core signals API and semantics, if people have questions or thoughts about this model.
+
+MF: So I want to just ask a question about the thing you just mentioned about the computed signals based on the same values as it was previously, it won't have to be recomputed. That makes sense for simple, primitive values, but what if the value is complex? It's like an object that has properties that can be mutated, and then you could technically have that computed signal be a function of those properties, and even though its identity is the same as before, it represents something different.
+
+DE: Right. So that’s why computed signals have this .equals thing in the options that you can set. You can set that to whatever comparison function you want, including always returning false. Good question. Any more questions?
+
+DMM: Does this require that… well, you say you can define the .equals, but presumably you can define that the objects are immutable.
+
+DE: So part of this is based on trust. You can write whatever you want in the body of a Signal.Computed. It doesn’t have to be a pure function. We can’t validate that it is a pure function. But you’ll only be hurting yourself if it is not a stable kind of comparison.
+
+NRO: Regarding the case of objects with multiple properties. So kind of like a star. Is the expectation that I should use a signal for the object or put a signal in each property to only wrap the actual primitive contents?
+
+DE: It depends on what you’re trying to do. Signals are atomic and can replace one value for another value. And if you want to do finer grained diffing in that, if you want to do a calculation based on just part of the object, you might want to have a Signal that represents just part of that. Both so if that thing changes and someone’s using something else, they don’t get invalidated and vice-versa. Other invalidations that aren’t relevant to you don’t affect your subsequent calculation. In practice, the set of cases where you end up using a custom equals method is somewhat rare.
+
+DE: It’s conservatively safe to always return false, to never treat these things as equal. And some frameworks like Ember don’t do that caching on the way out; they evaluate things downstream, and that works out: things still behave coherently, it just might be more evaluation than you like.
+
+TKP: I just tried to wrap my head around this. You introduced auto-tracking and this solution to the glitch problem, where you update the proper dependencies before you try to resolve a value. But you only arrive at this problem if you update your view before you finish updating your model.
+
+DE: Yeah. That’s the time that it’s most visible. This does occur sometimes in applications. Especially because you end up wanting to sort of stagger your updates and return to the event loop so other things can run. Sometimes you have a glitch and it is overridden before you get back to the event loop, so it’s not user-visible.
+
+TKP: If you have an incomplete model?
+
+DE: Yeah, you can. Because each one of these nodes could have its own view of the world. They could independently go to the DOM and write stuff there. These have real bugs that have been shipped in production websites. That’s why people put in effort to fix it.
+
+TKP: Yeah, but you introduced in the first slides that you want to have the view as a projection of the model?
+
+DE: That’s right.
+
+TKP: You immediately break this concept.
+
+DE: Right, this is what we’re trying to avoid. So this comes back to the idea of fine-grained incremental updates, because we’re not doing immediate mode, it’s not going to be a function of the model. So we want it to be equivalent to the function of the model. We want to find an algorithm that reaches the same result as immediate mode, and incrementally calculating it with fine-grained updates, that’s the goal. Some algorithms are incorrect. “Glitches” illustrate the incorrectness.
+
+TKP: So, do you track these changes to the model correctly?
+
+DE: Yes, the signal proposal has a correct algorithm for calculating the updates to the view. And the correctness is based on being pull based.
+
+SYL: Yeah, I think this is just a rephrasing of DE’s answer, but the reason the strict “fully evaluate the model before beginning the view” answer isn’t totally satisfying is that it forces you to eagerly evaluate everything in the model whether or not the view currently depends on it. If you want to be lazier, you don’t know the dependencies yet; you have to get the correct final answer to the model’s version of each computed in one pass, and get the right answer the first time, every time. Which is what this invalidation propagation phase is about.
+
+CM: So, I have a question about the dynamic nature of the dependency list. If you go back, you have the example with a question mark. If I look at this, indeed the dependency set can change depending on the value of a. But say we’re in a situation where a is true the first time, so the dependency set is a and b. Then if b changes, it will get updated, because b is in the dependency list. And if c changes, that change gets missed, but in this case it doesn’t matter: while a is true, c’s value is never used. So the incompleteness of the dependency set doesn’t matter in this particular case. Just in terms of refining one’s intuition about how this works, I’m curious how much that sort of reasoning (yes, the dependency set varies, but it doesn’t matter) generalizes to more complex examples?
+
+DE: So, this intuition only holds when we’re in the land of pure functions. If there’s any nondeterminism, for example `Math.random() > .5 ? b.get() : c.get()`, the dependency set would be inaccurate.
+
+CM: Let’s take it as a given these are all pure functions.
+
+DE: Right. I think it is a sound algorithm in the case, I was surprised by this initially, also when I learned about it. I thought, aren’t you missing something? But I think it just works.
+
+CM: That is counterintuitive, but it still works.
+
+DE: I agree.
+
+DE: For algorithms, there are two core algorithms possible. One is where there is a global version number and each node records a version. When you want to check whether a node is valid, you check whether anything it depends on, recursively, is newer than it, that is, whether you see anything going in the wrong direction.
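The version-number scheme can be sketched roughly as follows. This is hypothetical and simplified: names are illustrative, and dependencies are passed in explicitly rather than auto-tracked, purely for brevity:

```javascript
// Hypothetical sketch of global-version-number validation; not the proposal's API.
let globalVersion = 0;

class VState {
  #value;
  version = 0; // the global version at which this state was last written
  constructor(value) { this.#value = value; }
  get() { return this.#value; }
  set(value) {
    this.#value = value;
    this.version = ++globalVersion; // every write bumps the global clock
  }
}

class VComputed {
  #fn; #deps; #value;
  #computedAt = -1; // global version when we last evaluated
  constructor(fn, deps) { this.#fn = fn; this.#deps = deps; }
  get() {
    // Valid only if no dependency is newer than our last evaluation.
    const newest = this.#deps.reduce((m, d) => Math.max(m, d.version), 0);
    if (newest > this.#computedAt) {
      this.#value = this.#fn();
      this.#computedAt = globalVersion;
    }
    return this.#value; // otherwise, the cached value is still coherent
  }
}
```

The appeal of this scheme is that setting a state is O(1): nothing is pushed; staleness is discovered entirely on the pull side by comparing versions.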
+
+DE: The other algorithm is to push dirtiness and pull values. So, say we have this dependency graph where F and all other nodes except for A are computed signals, and A is a state signal. Someone sets A. This isn’t the initial evaluation: someone has already done F.get() and the dependency graph was set up. Now we set A and want to figure out what to invalidate and reevaluate. When we set A, C and B will be eagerly colored dirty; D, E, and F will be colored as pending: maybe they are dirty.
+
+And then, once we start pulling F, once we do F.get(), it will start evaluating things in this topologically sorted way, remarking their dirtiness as appropriate, until all of the dependencies are settled and it can get F. But critically this depends on the first step: the push. The graph starts out looking the way it does at the end because we were able to use forward-facing edges to get from A through to F and mark things as invalid.
+
+DE: So, that algorithm requires bidirectional edges. And that’s a problem I would get back to if I had time to go through the whole deck. But importantly, there’s an isomorphism between the two algorithms. They always get the same answer, and not just the same final result for F: they actually run the same computeds in the same order, even if you observe the side effects. They both have bookkeeping. And one important thing is the garbage-collect-ability of signals.
+
+If you have a graph with bidirectional edges and a computed signal that you’re no longer using, even if nobody is pointing to it from the usage side, something has this forward edge to it, from the definition to the usage. So somehow that has to get unlinked to allow it to be cleaned up, if you have the forward edges. On the other hand, having forward edges makes it easier to implement reactions with watchers.
+
+DE: So let’s talk about the watcher API. The goal is to expose when a computed is dirty, so that you can trigger reevaluation. The typical pattern is, you put a watcher on a set of signals, and when they become dirty, you schedule a call to `.get()` to run later. We don’t allow you to call `.get()` immediately in a watcher callback. The watcher callback is called synchronously when a signal becomes dirty; if you could `.get()` right there, it would break an atomicity property: right now, you can do multiple sets on different signals and there is no way that any code with access to those signals can intervene between them, because all it can do is schedule things for later. So the API looks like this: you have a notify callback that you pass into the constructor, and you can add signals to the watcher set and remove them from the set.
+
+DE: Now, for removing from the set, we might consider cancellation or disposal as the mechanism; that’s kind of a TBD. To implement effects with a watcher takes a fair amount of code, so I don’t think we have time to go through it. But the idea is that you schedule a task, like a microtask, that will `.get()` the watched signals. That lets you make a computed that you put in a watcher, and it will rerun when it becomes dirty.
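The shape of that effect pattern can be sketched in miniature. This is not the proposal’s API (the actual proposal exposes a Watcher under `Signal.subtle` and schedules via microtasks); here, the scheduling is collapsed into an explicit, hypothetical `flush()` so that the “notify may only schedule, never re-enter” property is visible:

```javascript
// Hypothetical, heavily simplified sketch of the effect-on-watcher pattern.
const pending = new Set(); // effects scheduled to rerun

class WState {
  #value;
  #watchers = new Set();
  constructor(value) { this.#value = value; }
  get() { return this.#value; }
  set(value) {
    this.#value = value;
    // The notify step is synchronous, but it may only *schedule* work;
    // no user code re-reads signals here, preserving atomic multi-set updates.
    for (const w of this.#watchers) pending.add(w);
  }
  watch(w) { this.#watchers.add(w); }
}

function effect(fn, signals) {
  const run = () => fn();
  for (const s of signals) s.watch(run);
  run(); // initial evaluation
  return run;
}

function flush() {
  // Later (normally in a microtask), pull current values. Intermediate
  // states set between flushes are never observed by the effect.
  const work = [...pending];
  pending.clear();
  for (const run of work) run();
}
```

Because `pending` is a Set and values are only pulled at flush time, two consecutive `set()` calls result in a single rerun that sees only the final value.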
+
+DE: GC-able computed signals are important because you may have signals coming in and out of existence as your program executes. They may be, for example, related to a particular component. So you may have a list of things, each with some interactive element, and these elements of the list get created and deleted over time. And so we have to figure out a way that both state and computed signals can have lifetimes. One way is for the lifetime to be based on ownership: there’s something around the signal, probably a component, that owns the signals created inside of it, and it somehow knows they are related to each other. Then you can use that for disposal, because when that outer component is disposed (there’s some outer mechanism to control its lifetime), it can dispose the signals inside of it that are related.
+
+DE: We can do a better job of model view separation by enabling computed signals to be GCd, but this runs into this bidirectional edge issue. And more, and more frameworks are adopting this GCable computed signal approach, which the champion group would prefer to adopt if we can manage to prove it is really fast enough, whereas our current polyfill is not quite there.
+
+DE: A computed is alive if either there is a reference to it from JavaScript, or an indirect reference, for example from a closure in some other computed, or it is included in a watcher. Unwatched, unreferenced computeds should be GC-able, but bidirectional edges can hold them alive longer. One option is to use WeakRefs for the forward edges; that is not great, because WeakRefs have a lot of costs. The other is to switch between the two algorithms I mentioned earlier depending on whether something is being watched. That has its own complexity, but it’s roughly the algorithm in the polyfill and the spec.
+
+DE: So next steps are to continue developing the polyfill, tests, benchmarks, and integration into libraries, frameworks, and applications, and to collect feedback based on this. We have some known things that should be changed about the current API, and people should expect this proposal to spend more than 12 months at stage 1.
+
+WH: How does this deal with graphs which reconfigure themselves? A simple example is this. You have a state signal A and two computed signals B and C. B, when you query it, queries A; if A is false it then queries C. And C, when you query it, queries A; if A is true, it then queries B. So everything is fine as long as A stays false. Then somebody changes it to true. What happens?
+
+DE: So I don’t, I’m not quite following the example. But an important property of this system is that signal graphs are acyclic.
+
+WH: Yes. This is acyclic.
+
+DE: Maybe.
+
+WH: This is acyclic once it settles down. But it gets tricky during transitions.
+
+NRO: Can you repeat the example? We will write it in the queue with the right dependencies that you mentioned.
+
+WH: Okay, A is a Boolean state. B queries A. And if it is false, then it queries C. C queries A, if it is true, then it queries B.
+
+```javascript
+const a = new Signal.State(false);
+const b = new Signal.Computed(() => (!a.get() ? c.get() : 5));
+const c = new Signal.Computed(() => (a.get() ? b.get() : 7));
+```
+
+DE: Okay. So, what’s the problem? I mean, doesn’t this settle immediately?
+
+WH: It settles immediately, but then A changes. What happens when A changes?
+
+DE: When A changes, B and C both get eagerly invalidated. The invalidation is based on the previously seen dependency set. So the invalidation might end up doing too much if the subsequent evaluation doesn’t use those dependencies, but I’m not aware of scenarios with pure functions where it will do too few invalidations. Can you explain why you think it might do too few invalidations?
+
+WH: The issue is during invalidation it might get a temporary cycle in the graph.
+
+DE: Okay. Let’s follow-up offline. I would really like to understand the cycle scenario that you’re talking about.
+
+SYL: Yeah. I think we can probably take this offline. But the intuition, should basically be, because we get lazily, there is a stack of things being gotten. And the only sort of cycles that might matter after we’re done are the ones that are being instantiated by the current stack. So I think this example goes through, we may be—we may have to think harder about which sort of not quite cycles we make sure to allow if that’s actually helpful.
+
+DE: Right. So just to clarify: the cycle detection, because of the lazy get, can be completely online. Each computed signal has a bit for whether it is currently computing, and if, after the topological sorting, you are accessing something that is also currently computing, you know there’s a cycle.
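That online check can be sketched as follows; this is a hypothetical minimal class, not the proposal’s machinery, shown only to illustrate the “currently computing” bit:

```javascript
// Hypothetical sketch of online cycle detection during a lazy get():
// each computed tracks whether it is currently evaluating, and a
// re-entrant get() on the same node means there is a cycle.
class CComputed {
  #fn;
  #computing = false;
  constructor(fn) { this.#fn = fn; }
  get() {
    if (this.#computing) {
      throw new Error("cycle detected in signal graph");
    }
    this.#computing = true;
    try {
      return this.#fn();
    } finally {
      this.#computing = false; // always reset, even if evaluation throws
    }
  }
}

// A graph that is cyclic in its current configuration:
const cycB = new CComputed(() => cycC.get());
const cycC = new CComputed(() => cycB.get());
// cycB.get() throws: "cycle detected in signal graph"
```

Note this only flags cycles that are actually exercised by the current evaluation stack, which matches the point above: cycles that the current configuration never reaches are not an error.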
+
+### Summary / Conclusion
+
+- The goal of signal algorithms, which many frameworks have gradually reinvented, is to accurately recalculate parts of the function from the model to the view, as if the whole thing were run from scratch, but only re-running that part.
+- The pull-based, topological sorting algorithm, with auto-tracking, provides a correct, coherent way to do this incremental re-evaluation. Push-based models suffer from glitches.
+- It is useful for computed signals to be GC’able when not watched, but simultaneously to have Watcher callbacks which are eager. There are data structures that support this combination, but it’s a little tricky.
+- Discussion with the committee was mostly clarifying questions. WH raised a graph scenario to investigate offline for correctness.
+- Signals are at stage one. They have been integrated experimentally into a number of frameworks and libraries, but there is still significantly more experimentation and iteration to do.
+- We encourage involvement at the various levels. Please join our Matrix channel or unofficial Discord. There are a lot of opportunities for getting involved especially around coding both in the polyfill and tests and example use cases.
+
+## Atomics.pause
+
+Presenter: Shu-yu Guo (SYG)
+
+- [proposal](https://github.com/syg/proposal-atomics-microwait)
+- no slides presented
+
+SYG: Excuse me. Okay. So the first topic is Atomics.pause. There is nothing new, basically, since last time it was presented. I don’t remember when this was renamed; it was previously called microwait.
+
+USA: Sorry for the interruption, SYG. Just a second, I forgot to ask for note-takers. Would somebody like to help out with notes for this final session? Thank you, BAN and JKP. Please continue, SYG.
+
+SYG: Right. So this was previously called Atomics.microwait. After feedback that the name microwait was kind of unhelpful, this was renamed to pause, which is a perfectly good name and also happens to be what some of the equivalent CPU instructions are already called. Otherwise, this is the spec text I’m showing on the screen here. There are no changes in the proposed behavior, which is to say there’s basically no behavior: it is purely about timing, so there is no observable behavior.
+
+It takes an iteration number, which is a hint if you would like to implement exponential backoff in your spin-loop waiting algorithm. It doesn’t really do anything with the hint other than validate that it is a non-negative integer, if you pass one in; otherwise it just returns. So hopefully this is quick. I’ll try to ask for stage 2.7 first, but before that, let’s open it to the queue.
+
+WH: There is something really bizarre here with this API distinguishing +0 from -0. We should never do that unless there is a really compelling reason. So what’s the reason for that?
+
+SYG: So there are two reasons. Okay, so why does it do any checks at all? That is the precedent we set with “let’s stop coercing things”. Why does it distinguish negative zero from positive zero? One, there was feedback from JHD that that would be a nice thing to do, because the intention was non-negative integers. Two, for V8 at least, it has a nice implementation-simplicity benefit: you can just check whether the input is what we call a small integer, an SMI, the untagged things not allocated in the heap that live in the value representation of a pointer, basically. And negative zero is not considered a small integer, because it needs to be distinguished from positive zero. So if both negative zero and positive zero were accepted, you either have to do normalization or you have a few more branches. It’s not a big deal, and this is totally open to change, but that’s basically the reason.
+
+WH: So, if somebody gives you 10**40, it’s fine, right? That’s an integer.
+
+SYG: Yep.
+
+WH: Yeah. But that’s not a small integer.
+
+SYG: That’s also true.
+
+WH: One of the invariants of the spec currently is that we do not distinguish plus and minus zero for counting things. This violates an invariant. So—
+
+SYG: I’m happy to take step B out, that’s concretely what the concern is, right?
+
+WH: Yes, this will come up in places, like if we use our shiny new function to add a series of numbers. If you provide no numbers to it, it produces -0. That’s the additive identity element.
+
+SYG: Right, of course. I hope you’re not using that in a spin wait loop. But yeah, sure, that’s valid feedback.
+
+WH: I’m fine with this if step 1b is deleted.
+
+USA: Okay. MF is on queue next?
+
+MF: I was going to say the same thing as WH, we should not distinguish minus zero. You can arrive at minus zero in various ways. We try not to distinguish them elsewhere. It would be surprising to distinguish them here.
+
+USA: Okay. Next we have KM.
+
+KM: I guess it depends on your implementation, but for a NaN-boxing implementation, checking for negative zero is just more work. So it’s not really a win there. Given the inconsistency with the rest of the language, it seems unfortunate to check for that. It sounds like you’re not going to anyway. So yeah.
+
+USA: Next we have NRO.
+
+SYG: Sorry, before we move on to the next topic, which seems to be a change of subject: to close out this one, it seems like there are plenty of good reasons to remove step 1.b here, so I will be removing that. Okay, please continue with the queue.
+
+NRO: Okay. Yeah. I’m going to turn now to the recommendation that the time spent waiting increase together with the parameter. Should that be a normative requirement, that pause(n + 1) waits at least as long as pause(n)? Or would it be okay for an implementation to have pause(n + 1) be faster than pause(n)?
+
+SYG: I don’t know what it would mean to make it a normative node. Like, timing is not an observable thing. What would that even mean?
+
+NRO: I think we do have some, like normative text with regard to maps and set, they should not do things in linear time. As in—
+
+SYG: Yeah. Okay.
+
+KM: Okay. I guess on that same note, even if you normatively say the implementation has to take longer, you cannot rely on that: the OS or CPU could reschedule you on the smaller number or the bigger number anyway. It is not like there is control over that. So it is kind of a meaningless normative requirement; it would be impossible to actually verify or ensure.
+
+USA: Should I move on with the queue, SYG?
+
+SYG: Yeah. Seems like the responses or the topics are similar.
+
+USA: Yeah. So WH you are next.
+
+WH: Is the intent of the iteration number scale to be linear or exponential?
+
+SYG: The argument itself is intended to be linear. Let me bring up another example if I have it.
+
+SYG: But yeah, the argument itself is intended to be linear. I will switch sharing tabs for a second. This bit of pseudocode is how a spin loop usually looks in a mutex. There is some spin count, and the idea is that this spin count would be passed to `Atomics.pause`, and `Atomics.pause` could choose to interpret that linear input as exponential backoff.
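As a rough illustration of the pattern being described (the slide itself is not in the notes), here is a hypothetical spin-then-block lock acquire. `SPIN_LIMIT` and the lock protocol are illustrative, not part of the proposal, and the fallback makes the sketch runnable on engines that do not implement `Atomics.pause` yet:

```javascript
// Hedged sketch: feed a linearly increasing iteration number to Atomics.pause,
// then fall back to a blocking Atomics.wait once spinning has gone on too long.
const pause = typeof Atomics.pause === "function"
  ? (n) => Atomics.pause(n)
  : () => {}; // no-op fallback for engines without the proposal

function acquireLock(ia, index) {
  const SPIN_LIMIT = 64; // illustrative tuning knob
  let spins = 0;
  for (;;) {
    // 0 = unlocked, 1 = locked; try to take the lock atomically.
    if (Atomics.compareExchange(ia, index, 0, 1) === 0) return;
    if (spins < SPIN_LIMIT) {
      // Linear hint; the engine may internally back off exponentially.
      pause(spins++);
    } else {
      // Contended for a while: block instead of burning CPU.
      Atomics.wait(ia, index, 1);
    }
  }
}

function releaseLock(ia, index) {
  Atomics.store(ia, index, 0);
  Atomics.notify(ia, index, 1); // wake one blocked waiter, if any
}
```

The `ia` argument is an `Int32Array` over a `SharedArrayBuffer`; in the uncontended case the compare-exchange succeeds immediately and neither `pause` nor `wait` is reached.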
+
+WH: Well, this example actually makes sense with either interpretation, but it does very different things. If you increment _spins_ by one, it could mean to spin for one more constant time, or it could mean double the wait.
+
+SYG: Right. And I’m leaving that as the implementation’s choice, whether that is exactly one more pause. The usual way you would write this, with inline assembly or an intrinsic that actually emits a pause instruction, you, the implementer writing C or C++ or whatever, can choose to do linear backoff or exponential backoff. The reason I didn’t fix that choice for the JS proposal is that the call overhead is pretty high if you’re not in jit code. But if you’re in jit code, the call overhead can be completely inlined away. So when you’re in the interpreter, I expect the implementation of Atomics.pause to wait a different amount of time for a given iteration number than the inlined version in the optimizing jit does. Does that motivation make sense?
+
+WH: Possibly. My main point here is that we should make it clear in the spec text whether, if one wants to wait twice as long, one should add a constant to the value or whether one should double the value.
+
+SYG: Okay, understood. I would document that the input number is intended to increase linearly.
+
+WH: Okay. Thank you.
+
+USA: We will move on with the queue. I implore all of the remaining speakers to be quick. Around 40 seconds. But yeah, next we have DE.
+
+DE: Those three notes look good. I think it would make sense for them to be normative text, basically in their current form. We’re not restricted as a specification to only define things in terms of some JS abstract machine; most specifications do not restrict themselves in that way. But it’s okay with me if it ends up like this.
+
+NRO: Sorry, what I meant before is that usually we use green notes to mean “this is just clarifying what is otherwise being enforced in some other part of the spec”. That’s what I meant by a non-normative note.
+
+DE: Please, everyone should stop saying “normative notes”. Notes are informative and the other text is normative. I know it’s been said a lot of times in this committee, but it is confusing, it is not a thing we have ever done, and I don’t think we should do it. This text, I think, should just be normative; it should just be in white.
+
+NRO: I agree with you. That’s what I was suggesting.
+
+USA: Okay. Perfect.
+
+USA: All right. Unless you want to respond to that, SYG. We have RGN next.
+
+RGN: Unexpectedly in the same vein, and speaking as a reader of specification text, I’d like an explicit note step that identifies the point where an implementation waits (if it is going to do so).
+
+SYG: That sounds fine to me, but can I ask why you would find that helpful as a reader?
+
+RGN: Because the emphasis when looking at the behavior of an operation is on the algorithm rather than on the editorial notes coupled to it.
+
+KM: I guess I still question whether notes can really be normative. Saying that, normatively, you are required to follow the best practice of the underlying architecture for a spin loop seems almost impossible, because that can change based on CPU version, and you might not distribute different binaries for a new CPU. If a new CPU comes out and you have not updated the binary, you are not normatively correct on that CPU. I don’t know what that means.
+
+SYG: There is no observable behavior. We can say whatever we want, sure, but implementation guidance like this has in the past been non-normative, and I don’t know what it would mean for implementers to treat it as normative.
+
+KM: Yeah, so I’m confused on—
+
+USA: There’s a reply by Michael.
+
+MF: Yeah, just responding to that one point by KM. We have implementation-approximated. This is similar to that, at least.
+
+SYG: It is not at all similar. Implementation approximated. It is about approximating an observable result.
+
+MF: It is; do your best given the platform restrictions you may have.
+
+SYG: If you can observe the degree to which you have like trig functions, that is implementation approximated.
+
+MF: Nobody brought observability into this.
+
+SYG: I’m bringing observability into this. That is what normative means, right, for a piece of spec text? Implementation-approximated is: I call sine of x, I know what the mathematical answer is, and I can check how close the result I got is to the mathematical answer. Here there is nothing to check. I’m not sure what it even means to normatively approximate the ideal; there is no single ideal here. I just don’t see the analogy.
+
+DE: I don’t want us to state a stricter requirement. For example, per the cases that KM raised, we can’t say that one call has to wait longer than another. I think SYG’s is probably pretty good wording for this. Anyway, the ask of implementers is nothing: absolutely nothing should change in implementations. But the purpose of the specification is to coordinate expectations, and the purpose of adding this feature is to get on the same page about what we’re doing. We are doing that. Implementers will all implement exponential backoff or something approximating it, so let’s just make that part of the normative text.
+
+USA: So we are past time. And looking at the items MF has put on the queue, I think maybe we could go for consensus and then this could be discussed. What do you think, MF?
+
+MF: Consensus on what?
+
+USA: That’s a good point. SYG, what would you like to ask for?
+
+SYG: So, MF had two queue items.
+
+SYG: I will ask for an extension to drain the queue.
+
+SYG: But I still would like to ask for 2.7. If the holdup is exploring the editorial space here, I don’t want to be held up for two meetings because people are debating how we should write “must” notes. That seems like not a productive use of holding up this proposal.
+
MF: Okay. If we're doing the queue, I would like to start my topic.
+
MF: Yeah. I would be more comfortable with 2 than 2.7. Technically, I think we are fine going to 2.7 because there’s editorial discretion and we can resolve that. The reason why I prefer 2 is that, because of the editorial space here—how we try to represent this—depending on the direction we go editorially, we may feel it’s necessary to just run it by committee again to make sure their understanding is the same as what our understanding was of what we are trying to do editorially. But I am not opposed to Stage 2.7. Remember, though, if we stay at Stage 2, there’s nothing wrong with going directly to Stage 3 if you have the proper testing and experience, which is my next topic.
+
SYG: Sure. Let’s go into the next topic then. I will respond to that real quick. I would like—okay. My actual goal is, I would like to get Stage 3 next meeting. I don’t care too much about whether we end at Stage 2 or 2.7 this meeting, so long as there’s an understanding that I'm not asked to wait another round of plenary due to exploring the editorial space come next meeting. If we can work out this stuff between this meeting and the next meeting, I have no concerns about the exact stage we end up at. Because the bar here is—it seems like the concern will have no impact on the actual technical parts of the proposal. Right?
+
+MF: Yes, I agree. And I think it’s perfectly fine to go from 2 to 3 at the next meeting, assuming we have resolved anything we need to resolve.
+
SYG: Okay. Then let’s please take care of the next item.
+
+MF: We have a reply before that.
+
USA: There is a point of order. SYG, you have the rest of your time as well. Did you want to continue draining the queue, or… ? Effectively transferring time?
+
DE: Okay. I think the remaining questions here, although they’re about what the normative text says, are editorial questions. I think it’s not going to change the shape of the API. So I think the most appropriate thing for us to do would be to go to Stage 2.7. Now, about testing: if these are being put in notes because it’s impossible to write a Test262 test for them—I think we should be able to land normative spec text which is impossible to write tests for, sometimes. We can’t test all the SharedArrayBuffer memory model stuff. So anyway, I think we can work out this question, whether it’s in notes or not, you know, any time between now and Stage 4. And I would encourage us to go with 2.7 rather than 2, to send the strongest positive signal—we don’t want to put up extra barriers at the same time as we are going to figure out the best wording possible for this before it’s merged into the spec.
+
+USA: Next, there’s MF?
+
MF: Okay. Well, yeah, my next topic was on testing. I just wanted to know what your plans were—if you have thought about what we should have as the Stage 3 entrance criteria, given the impossibility of writing most of the tests for this. You could really only test for its presence and callability or something.
+
+SYG: That’s pretty much my plan.
+
+MF: That was the plan?
+
+SYG: Test for API surface. That, like, you know, function length is correct.
+
+MF: Given that, I just wanted to make sure, preparing you for advancement to stage 3, is there anything else the committee would like to look for other than those simple presence and callability tests?
+
+DE: I am curious whether browsers will have their own downstream tests for this.
+
+SYG: I don’t think it’s testable. It’s not observable even to the OS. Much less to the VM on top of the OS.
+
USA: So that was the queue, SYG.
+
+SYG: Great. Thanks. I will come back with the formal consensus request for 2.7 with 1.B removed. The highlighted line, this is removed.
+
USA: Anyone? Also feel free to give any explicit votes of support. Here they come: WH, DE and ACE all express support.
+
SYG: This is 2.7. The action items for next meeting are that MF and I, and probably KG, will—first, I propose we as the editor group come up with our preference, then we present what to do about these normative/informative notes to the committee at the next plenary, as a prerequisite probably for Stage—technically for Stage 4, but I would like it to be settled before then. Sounds good to you, MF?
+
+### Speaker's Summary of Key Points
+
+- Core semantics unchanged (there is no core semantics, it's an unobservable wait)
+- Will remove -0 checking (step 1.b) for optional iteration number argument
+- Ongoing editorial discussion for how to best present implementation guidance notes
+- Asking for Stage 2.7
+
+### Conclusion
+
+- Consensus for Stage 2.7
+- Editor group to come up with recommendation for implementation guidance notes and come back next meeting
+
+## Shared Structs discussion around methods
+
+Presenter: Shu-yu Guo (SYG)
+
+- [proposal](https://github.com/tc39/proposal-structs)
+- [slides](https://docs.google.com/presentation/d/1aeXqO6uR_HVuWyciudHGRCd8J12UGhqwygYHl3FOmVc/)
+
+SYG: All right, thanks. Next topic, let me… share. Are people seeing the new screen? I can’t see what my screen is. Okay.
+
SYG: So this topic is—this is not asking for a stage advancement; it is not even an update. It is a recap of what is currently proposed for what to do about methods on shared structs, and a presentation of the concerns that MM has brought up to the broader audience. The first part of this slide deck will be recapping the motivation and the actual mechanisms being proposed here and why they are proposed. And then there will be some discussion topics.
+
SYG: So the motivation for having methods on shared structs is basically that programming is defining data structures and then having procedures and behaviors on those data structures. So far the proposal has been about defining and sharing these between threads. But the second part of that is that you would like to also be able to do things with those shared structures—how to define functions on those. There are some assumptions before we go on to explain how we ended up where we currently are. As you know, JS functions as we have them today are deeply unshareable. They’re tied to the creation realm. That kind of thing.
+
+SYG: So they’re just very staunchly not thread-safe. It is possible to propose new exotic callables that are more restricted in some way, that could be shared. But the design space is large and adding a new kind of function to the language is actually a—has a lot of downsides and I am not interested in exploring that space currently.
+
SYG: So if we're not proposing new exotic callables and instead use the functions we have today, recall that it’s fine for unshared things to access shared things; the problem is if shared things can access unshared things. It’s fine for JS functions to access shared data. So the upshot of all of this is: let’s see how far we can get without introducing new exotic callables, because that’s a whole separate design space, and last time I checked with folks—granted, years ago—everybody was extremely allergic to adding a new kind of callable with very different behavior.
+
+SYG: So those are the assumptions.
+
SYG: Most of this talk will have running examples of this simple 2D point structure—a 2D point structure with X and Y. And I want to calculate distances between two points.
+
SYG: We have a sucky option, which is we use free functions. Imagine you were doing object-oriented programming in C, before the OO languages came along. Am I still on the thing? My video feed froze.
+
+CDA: You’re on sucky option free functions.
+
SYG: Okay. All right. We can do free functions. So if you want a distance method on point, you can make a free function called distance that takes points as arguments and then computes the distance. We tried this first in the prototyping efforts. We led with this because it’s strictly less work. It seems fine on paper. You can tell people, it's bad you don’t have methods, but use functions—you have functions. That’s fine. The problem with that is we got unanimous feedback that this is hard to incrementally adopt into a codebase. This doesn’t encapsulate anything. We could program everything like in C, but obviously we moved on from that because there are better organizational tools for codebases.
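
The free-function style SYG describes can be sketched in plain JavaScript (names are illustrative; ordinary objects stand in for shared struct instances, since the proposal's syntax is not available today):

```js
// Free-function style: data and behavior are separate, as in C.
// Plain objects stand in here for shared struct instances.
function makePoint(x, y) {
  return { x, y };
}

// A free function instead of a method; callers must know about it
// and import it separately from the data definition.
function distance(p1, p2) {
  const dx = p1.x - p2.x;
  const dy = p1.y - p2.y;
  return Math.sqrt(dx * dx + dy * dy);
}

const a = makePoint(0, 0);
const b = makePoint(3, 4);
console.log(distance(a, b)); // 5
```

Nothing here ties `distance` to points; that lack of encapsulation is exactly the feedback SYG reports.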
+
SYG: So why does this make incremental adoption really hard? Say you have existing code you want to parallelize a corner of. This is the most common case that people want to use this feature for in existing JS codebases. You have a big codebase that already has something—a game engine, something. Most programs are not embarrassingly parallel, but have kernels that are. People look at giant applications that have performance pressure and identify the subsystems to be parallelized. So say the corner you want to parallelize is this distance thing, with points. You’re probably trying out this feature in a codebase that has a point class, because everything is single threaded today, and that class has a method. Like, chances are that your code is not organized with free functions today.
+
SYG: On the left side is what the point class with a method looks like. This is how you would write it in JS today. If you were to use it, you use dot notation to call a method. That’s what you would do today. If you want to make this shared and use free functions, you now have to make a global change. You can’t just locally change the definition of point and the distance method. You now have to change all of your call sites to use free functions instead of method notation. And you say, okay, sure, but you can do a codemod. You can do global search and replace. Yes, you can. It’s a pain in the ass. But the bigger challenge of requiring a global codemod to adopt the feature is that it becomes very difficult to A/B test. Like, it’s a global change, but a conceptually [inaudible] change. The only thing to parallelize is the point class. If you change the entire codebase and every use site, then to A/B test the benefit of parallelizing that particular corner, you have to ship two binaries.
+
SYG: Now, imagine that you have multiple subsystems you want to run experiments on. This gets out of hand quickly. For real world software, you don't want to require a global codemod to adopt these features. And it’s generally unergonomic. You have folks who want to use this: if the input data is a shared thing, then you can’t use dot to call the methods and have to use free functions. So the feedback from the early adopters from both Microsoft and within Google was that it’s really difficult to incrementally adopt if you don’t have method support. So if the sucky option is free functions, I think the better option is methods. You want to type this—which is what you can already type with a class. But if this is what you want to type, we have to answer some hard questions here. Where do the shared struct methods live? JS functions are not shared, and shared data cannot point to unshared data because that is not thread safe. If functions are not shared, but the instances are shared, where do we put these unshared JS functions that are supposed to be prototype methods?
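
In code, the declaration SYG is describing might look roughly like this (strawperson syntax from the proposal space, not valid JavaScript today; the method body is illustrative):

```js
// Strawperson sketch: what adopters want to write -- a method on
// the shared struct, callable with dot notation, so existing call
// sites need no global codemod.
shared struct Point {
  x;
  y;
  distance(other) {
    const dx = this.x - other.x;
    const dy = this.y - other.y;
    return Math.sqrt(dx * dx + dy * dy);
  }
}

// p.distance(origin)  // dot notation, as with a class today.
// But Point instances are shared while JS functions are unshared:
// where does the distance function itself live?
```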
+
SYG: So the first mechanism we are proposing to bridge the gap here is that the prototype—the [[Prototype]]—of shared objects is realm local, meaning the prototype objects would be unshared objects. Here—the enclosed area is supposed to be a heap. This is a heap of shared things. They would point to a realm-specific prototype in some realm, and because it’s realm-specific and realm local, there is a different copy of it per realm. So if I access my point’s prototype inside realm A, I get realm A’s; in realm B, I get realm B’s prototype. It is thread-safe. Because it’s a realm-local thing, you can then put whatever values you want in it, because there are no restrictions.
+
SYG: So the magic happens when you access the [[Prototype]]: it does a realm local lookup instead. These things are themselves unshared—the prototype objects are unshared—so they can point to anything. Conceptually, how this works (implementations of course can differ) is that when you evaluate a shared struct declaration, you conceptually create a storage key. If you have done multi-threaded programming in other languages with thread-local storage, values there are keyed by a TLS key. This is basically the same, except instead of a thread, I think the most sensible unit of organization for us is a realm, not a thread.
+
SYG: So you get a realm local storage key. The keys themselves are shared; for the sake of ease of thinking about it, you can think of them as primitives, like a number or string or something. Each realm has its own table of realm local variables that are keyed by these storage keys, and the initial value of this prototype will be whatever the shared struct declaration evaluates to. So going back to this example: if this were a class, you would evaluate this to a point constructor, creating a `point.prototype`, and that `point.prototype` object would have a distance function on it.
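
A rough single-threaded model of this lookup scheme, with `Map`s standing in for each realm's realm-local table (all names here are hypothetical; this only illustrates the keying, not real sharing):

```js
// Each realm keeps a table of realm-local values, keyed by a
// shared storage key created per shared struct declaration.
// Think of the key as a shared primitive, like a number.
let nextKey = 0;
function createRealmLocalKey() {
  return nextKey++;
}

// One realm-local table per realm (Maps stand in for realms).
const realmA = new Map();
const realmB = new Map();

// Evaluating the declaration in a realm initializes that realm's
// entry: a fresh, unshared prototype object holding the methods.
const pointKey = createRealmLocalKey();
function evaluatePointDeclaration(realmTable) {
  realmTable.set(pointKey, {
    distance(other) {
      return Math.hypot(this.x - other.x, this.y - other.y);
    },
  });
}

evaluatePointDeclaration(realmA);
evaluatePointDeclaration(realmB);

// [[Prototype]] access on a shared instance would do a realm-local
// lookup: same key, but a distinct prototype object per realm.
const protoInA = realmA.get(pointKey);
const protoInB = realmB.get(pointKey);
console.log(protoInA !== protoInB); // true
```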
+
SYG: For the shared struct, you get a point constructor, which is realm local. The prototype object is realm local. And that prototype object would have a distance method on it.
+
SYG: An analogy that might help you think about this is primitive boxing. Say I type something like `true.toString()`. How does this work? `true` is a primitive. It doesn’t have a prototype slot or anything. How it works is that when you need to box a primitive to do a prototype lookup to call a method, you look up the primitive’s prototype—in this case `Boolean.prototype`—in the current realm and use that as the prototype. The difference from our mechanism is that for primitive boxing, the key is fixed. A boolean always looks up `Boolean.prototype`, a number primitive always looks up `Number.prototype`, and so on. For the shared struct case, the key is per shared struct declaration, instead of always being fixed.
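
The boxing analogy can be observed directly: a method added to `Boolean.prototype` in the current realm is reachable from the primitive `true` (mutating a built-in prototype is for illustration only; `describe` is a made-up name):

```js
// When a method is called on a primitive, the engine boxes it:
// a boolean always looks up Boolean.prototype in the current realm.
Boolean.prototype.describe = function () {
  return `a boxed ${typeof this.valueOf()}: ${this}`;
};

// The primitive true has no prototype slot of its own; the lookup
// goes through the current realm's Boolean.prototype.
console.log(true.describe()); // "a boxed boolean: true"
console.log(true.toString()); // "true", via the same mechanism
```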
+
SYG: So yeah—realm local prototypes are like primitive boxing, but the key is per shared struct evaluation. So that’s mechanism 1 for where these unshared functions live: they live in these realm local prototypes.
+
SYG: Okay. That means we have a place to put them. But we still have usability problems. And we’re calling this usability problem the correlation problem.
+
SYG: The question is: we have these realm local prototypes, and what is the initial value of these realm local prototype objects? If I evaluate this shared struct declaration inside realm A, it’s pretty clear what we get inside the realm where the shared struct declaration was evaluated. Like, if I evaluated this, I would expect the prototype to have the distance method on it inside realm A, because I evaluated the declaration in realm A.
+
SYG: But the whole point of this proposal is to share data with other threads. If I postMessage my point instance to another realm in another thread, what should that realm's local prototype object be? In realm B, I have not evaluated any shared struct point declaration. All I got in realm B was an instance of the point shared struct.
+
SYG: So realm B can have a realm local prototype for point, but what should the initial value of this prototype be? Does it have a distance method? Does it have anything?
+
SYG: So if you don’t do anything, basically realm B doesn’t have anything on its realm local prototype. How does it know what should go into it? All it knows is that my point shared struct has a realm local prototype, and it was looked up according to the key in the realm local table. But it doesn’t know what the initial value of that should be. So really, there’s a choice to say nothing should be in there initially. But that’s pretty unergonomic. The expectation obviously is that you have a distance method that can be called—one that is basically behaviorally the same as the distance method as evaluated in realm A, except a different JS function object.
+
SYG: So the correlation problem is: how do you correlate the prototype objects of a shared struct between different realms? And broadly, there are two ways. One is to manually do it. You have a manual initialization handshake phase where you communicate all the shared struct types that you need for your application to all the threads, and then all the threads know how to programmatically set that up. Once the application runs and communicates the structs back and forth, it can call methods as you would expect. There are downsides to the manual approach. It’s more code. It’s more startup. But the main problem from the performance point of view is that this means your application must have a serialization point at startup time. This is bad for loading performance. You can’t just load all your threads and go. You have to load them, then get ready for the correlation phase—I send you the struct types and you set up the prototypes—and after that you can start the application.
+
SYG: And we’d ideally like to avoid that serialization point. So that leads us to think: can we solve the correlation problem automatically, somehow? How do we do it without the initialization handshake phase? That’s the second mechanism we are proposing, which is called auto correlation. For the sake of being concrete—I have not presented any strawperson syntax at all—imagine in the shared struct declaration there is some incantation you put there that says: this is a shared struct I want to be auto-correlated. And what this does: say a shared struct is auto correlated, and the definition here is inside some module point.mjs, and I import it from multiple realms—I want things to work. Before explaining how we think this can be implemented, this is the goal. If I say this shared struct is auto correlated and import it, I should be able to communicate instances of that struct to different realms and have the prototypes, and observed prototype methods, set up without any manual correlation.
+
SYG: So how we are proposing to make that work is that if you declare a shared struct as auto correlated—remember this realm-local prototype key—we then say that this key is its source location. So in this case, the source location is point.mjs and whatever the character offset is where the declaration starts. If multiple realms evaluate the same SourceText containing an auto correlated shared struct declaration, things behave intuitively and just work.
+
SYG: So the idea is that I have my declaration in its own file, which means that every time I evaluate it—no matter how many times, from whichever realm—because it’s in the same file, it is the same source position that determines its realm local prototype key. So this point in realm A and this point in realm B have the same key. And when I evaluate point from point.mjs in realm A, I set up my realm local prototype and assign distance to it. And the same thing for B. And these can correspond because they have the same key, which is the source location of the shared struct point declaration.
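
Putting the pieces together, the auto-correlation goal might look roughly like this (the `with auto` incantation is purely hypothetical strawperson syntax, as is `shared struct` itself):

```js
// point.mjs
// Strawperson: "with auto" marks the struct as auto-correlated.
// Its realm-local prototype key is then its source location:
// ("point.mjs", character offset of this declaration).
shared struct Point {
  with auto;
  x;
  y;
  distance(other) {
    return Math.hypot(this.x - other.x, this.y - other.y);
  }
}

// main.js and worker.js each evaluate the same source location via
//   import { Point } from "./point.mjs";
// so both realms derive the same key, each initializes its own
// realm-local prototype with its own copy of distance(), and a
// postMessage'd Point instance has a working distance() everywhere.
```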
+
SYG: So these are the two mechanisms we are proposing to make methods on shared structs just work. Mechanism 1: the prototypes themselves are realm-local. And mechanism 2 is this opt-in auto correlation mechanism.
+
SYG: And I want to make a point here that the two mechanisms are distinct. Before moving on to the next portion of the discussion—the concerns are with the auto correlation mechanism—I want to drain the queue if there are any clarifying questions first, and after the clarifying questions, any concerns with the first mechanism, the realm local prototypes.
+
+SYG: So let’s go to the queue.
+
CM: So in my experience, it’s not uncommonly the case that the relationship between parties that are sharing a piece of shared data is asymmetric. It could be that one of them is more producer-like and one is consumer-like, but there are lots of possibilities. And in that case, it is nice to have some sort of single place where you can define the behavior, but the set of methods that makes sense in realm A and the set of methods that makes sense in realm B might be different, because they are doing different jobs. And if you do something like this, you are going to end up with the same behavior, which exposes a bunch of inappropriate functionality, on each side of the shared relationship. So I am wondering if you have taken into account the idea that you might not actually want to have the same behavior on either side of the shared relationship, but nevertheless you would like to be able to have a declarative form that lets you associate the behavior with the data in a more traditional object programming way.
+
SYG: I haven’t thought about that particular use case. It’s expressible in the current proposal, but non-declaratively—manually. Imagine, if you don’t choose to auto correlate, you just remove this incantation. Then, when you communicate a shared struct to another realm, the other realm will have no behavior attached to it, and if it wants to expose a different set of behavior, it can just set it up itself. Imagine you have a point-for-producers.mjs and a point-for-consumers.mjs. The difference is, each realm a point is communicated to can’t declare the type, but it can choose to put behavior on its `point.prototype` that is only appropriate for its realm. You can programmatically do this. I don’t know how you declaratively do this. The asymmetry is a runtime, programmatic asymmetry. We can’t declaratively say these realms have different views of the same type than other realms.
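
A sketch of the manual, asymmetric setup SYG describes (module names and method bodies are hypothetical, and `Point` is assumed to be a shared struct declared without auto-correlation):

```js
// point-for-producers.mjs (imported only in the producer realm)
import { Point } from "./point.mjs"; // no auto-correlation incantation
Point.prototype.produce = function () {
  /* producer-only behavior */
};

// point-for-consumers.mjs (imported only in the consumer realm)
import { Point } from "./point.mjs";
Point.prototype.consume = function () {
  /* consumer-only behavior */
};

// The same shared Point instance then exposes produce() in one realm
// and consume() in the other: the asymmetry is set up at runtime,
// per realm, rather than declared on the type.
```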
+
+CM: This seems like a common case. One of the reasons why you would have something shared is if you are trying to do some kind of scaling thing. But very often, in my experience, it’s really because there is a division of labor between the different realms.
+
SYG: But so, like, yes, I agree with that use case. But I usually do not—I don’t think I have seen that reflected in the type system—the type level layout of a class—whereby dynamically, from different threads, you are unable to access certain methods. Like, I haven’t ever seen that. But you could set that up programmatically.
+
CM: Well, more—yeah. Part of it is that this is introducing a new mode of communication between threads, aside from just messaging. This sort of asymmetry is much more common when the relationship is strictly based on messaging. But in my experience it is by far the predominant case, and the question is how it generalizes to the shared struct case. But I think this is something that would bear further thought.
+
+SYG: Do you have concerns with the—with expressing that asymmetry programmatically?
+
CM: Other than the fact—well, you know, there is the ergonomics consideration. You have to have a mechanism for setting up the binding of behavior at one end of the relationship or the other, rather than having it sort of automatically taken care of for you, as it is in what you are proposing for the symmetric case.
+
SYG: There are extensions to this. During struct evaluation, you could query something about the evaluating realm and then do something different, depending on what the realm tells you. But that seems like more complexity; if we are interested in exploring that space, we can think about it. But it seems—yeah, there would be more machinery needed to have this kind of asymmetric conditional evaluation.
+
CM: We may have inhabited different points in the application space in terms of the things we do. That asymmetric thing has been almost universal among a lot of stuff I have worked on historically.
+
SYG: I would have to see examples in any other programming environment with that asymmetry declaratively clarified at the type level.
+
+CDA: RBN?
+
RBN: Yeah. I wanted to reply to this because there are a couple of important points here. One way to look at this asymmetry is as just message passing, where you have functionality within one side and send it to some other service that accepts it and does the work—which is what we get with postMessage.
+
RBN: The other is something we did discuss in the champions group and with the stakeholders early on: how would you handle a case where you wanted to bundle for a client—bundle for a main thread and have a separate bundle for a worker? And how would you then want to strip out the functionality you don’t need, so you have a slimmer version of the point class in the worker versus the main thread, based on tree-shaking, et cetera.
+
RBN: And one mechanism we discussed or considered was the ability—just as we have some auto correlated incantation—to have another incantation within the definition that says that instead of correlating based on source location and offset, we correlate based on a predefined value like a UUID or URN or something like that, so that you’re manually correlating. We discussed that and found that there were security concerns raised around it then becoming a mutable global registry that could be used to provide communication between two independent objects that should otherwise have no means of communicating, or between security environments. So one of the preferences we have been looking at is this magical auto correlation mechanism: it takes all of these things out of the hands of the users so they can’t be misused for these types of capabilities.
+
RBN: And as a result, that imposes some limitations on what you can actually do with the declarations in those cases. You can’t split them and have differing versions of them. If you bundle them, the bundler needs to be able to split out the functionality, and many bundlers can do that today—split out shared functionality between multiple different entry points. And that is something you would have to do to have this auto-correlation mechanism work. Again, it gives you the benefit of not only the good developer experience and ease of use, but also avoiding mutable global communication channels that are security vulnerabilities.
+
CM: Yeah. The one example that comes to mind, that I have been thinking about, is a thing called proxy IO, a major piece of the infrastructure that glued Yahoo together. This is outside the JavaScript realm—this is C code. But the thing that functioned as a correlation ID was in fact a specialized device that was added to the operating system, which is not an approach that generalizes very well.
+
MAH: I just want to quickly answer that there are ways to manually correlate types without having the security concerns of a global mutable registry and so on. It does require defining a type that you can pass around. And so I don’t want to discuss it much more here—there are a lot of other questions. I want to note that this is not the only approach; not all approaches have security concerns. That’s all I wanted to say.
+
+CDA: LCA?
+
+LCA: Yeah. I am going to skip over this. I think it was already answered in the matrix.
+
+CDA: Sure.
+
LCA: I will move on to the next topic. How do you propose these auto correlation tokens or source locations are compared across realms?
+
+SYG: I was thinking of the first option by position.
+
+LCA: Okay. I feel like this would…
+
SYG: As a vector for attack, someone would need to control the server to serve you a different thing, but with the same specifier and the same byte position.
+
LCA: Maybe. I feel like it would be—yeah, I have to think about this more, but my initial instinct is that it’s better to do something where you can guarantee that this shared struct is identical—that it has the same SourceText in some sense.
+
SYG: Yeah. We can have an additional check that the source text is literally the same. We keep it around for toSource anyway. Sorry—toString, not toSource.
+
+LCA: Sure.
+
NRO: With the proposal that I presented earlier today—would you get the module once, and use this module to create multiple workers? Or like pass it on to multiple workers. The module would have a global ID, and this ID is what makes sure the source location is actually the same—it’s actually the same module loaded once and not loaded multiple times.
+
SYG: That sounds fine. My only concern there is how we should warn people that if they want to use this, they have to instantiate the workers in only one way. It might or might not be okay.
+
DE: I think it would make more sense for auto correlation to be based on the module specifier plus the location within the file, rather than tying it to anything to do with import source. If we’re okay with the dependency on ESM, but…
+
+LCA: Just to reply to that again, I don’t think—I think that means the same thing, Dan. I think import source ties to an import source that is keyed by the module specifier.
+
DE: Right. So it would work to use import source, but there’s no dependency there. It would work just as well if you are just importing a module—without import source. I had suggested a long time ago we use module expressions for this in a somewhat similar way: you would send the module expression, and it would track the identity. I think it’s easier to maintain identity by the path, by the module specifier, than to maintain it by the identity of the particular source object.
+
+LCA: Sure.
+
+DE: So I think—
+
LCA: In GB’s proposal, the source object does not have cross-realm identity; it’s the specifier that does.
+
+DE: Right. That’s where we landed on module expressions as well.
+
DE: So yeah. I think it is simpler to conceptualize this—sure. It’s the same thing. Sorry.
+
+CDA: Waldemar?
+
+WH: Let me see if I understand this correctly. What happens if your shared struct definition is not at the top level scope and you evaluate it many times?
+
SYG: Right. This would be—okay. What happens there is—one thing I glossed over: when you evaluate this, you also get a constructor in your realm, because that’s a function as well. So what would happen if you evaluate this shared struct in a loop is that for each evaluation you would get a distinct constructor, but if it is auto-correlated, you get the same `.prototype`.
+
SYG: This is different than if you don’t have the auto correlated incantation, and different from if you were evaluating a class declaration, where you get a fresh constructor and a fresh prototype object on each evaluation.
+
+WH: Okay. So if a shared struct is nested inside a function, it could capture variables and have a different set of captures for each time it’s evaluated.
+
+SYG: That’s right
+
+WH: And I guess the last one would win for auto-correlation — or what would happen?
+
SYG: A broader point you bring up is that auto-correlation doesn’t compose well with non-top-level evaluation. You shouldn’t use it on something that is evaluated multiple times.
+
+WH: Yeah. I understand that. I just wanted to understand what happens if somebody actually does that.
+
SYG: Yeah. We could in fact just prohibit it. I think syntactically you could prohibit that incantation from being used on nested declarations.
+
+CDA: RBN?
+
RBN: This is something that we’ve been going back and forth about in the champions group as well. One way to look at this: you might be able to say that shared struct declarations are only allowed at the top level. They can’t be nested in loops and must be declared once—essentially statically declared, unlike how classes are evaluated. That would dissipate the differences between how [inaudible] can be handled in that case. If you said shared struct declarations could only be top level, you don’t have to worry about those concerns. But it has other caveats—it definitely limits where you could actually declare these struct declarations, but…
+
+CDA: LCA?
+
LCA: I feel like this would severely limit where you can use them. You could not use them anywhere with CJS, because that’s wrapped in a function and not top level anymore.
+
+RBN: This is conjecture at this point. We are working out whether or not that needs to be done. It’s not entirely settled yet.
+
+CDA: MM?
+
+MM: Yeah. So I will start with the first—SYG divided this into two questions—yes, thank you. I have concerns with the realm-local prototype mechanism. I have concerns with anything that associates behavior in this way with the shared struct type. The Point example is a perfect example of what is, to my mind, really a contradiction in the nature of the proposal and the rationale for it. The Point—the x and y there are public properties, correct? The distance method you wrote does not encapsulate the x and y, and others that have access to a Point instance can, besides invoking the methods, go directly to the [inaudible], correct?
+
+SYG: That’s correct.
+
+MM: Okay. In that case, this is a non-thread-safe abstraction. The surface of this API necessarily does not encapsulate the concurrency concerns. The important fact about shared-state multithreading—the reason why BE used to take the stance that shared-memory multithreading was going to enter the language over his dead body—is that it is extremely hard to program correctly, and the experience of languages like Java and C# is that programmers really underestimate the difficulty and overestimate their ability to incrementally adopt it: to just take the programming patterns from sequential programming that they’re used to, add concurrency, and assume things work, which they don’t. It’s really a horrible train wreck. One way to understand the train wreck of trying to make this incrementally adoptable is that, in general, the JavaScript ecosystem is able to have extraordinary composability because, with regard to the kinds of accidents—not malice, but the kinds of accidents that fallible programmers make—the abstractions provided by libraries are generally defensive against the expected forms of accidental fallibility, and the abstraction mechanisms of the language are supportive of that defensiveness. Shared-state multithreading is a co-operative concurrency paradigm. Java, for all of its problems, at least has two things that this proposal does not have: the natural way to expose these things there is that any object accessed by multiple threads has all of its fields be private fields, and therefore not directly accessible from outside the object; and the methods would generally be what Java calls synchronized methods, meaning each method has its own mutual exclusion lock. Even that doesn’t work. That’s part of what programmers misunderstand and underestimate: thinking they can scatter synchronized methods around and have that be an adequate way to do it, and the result is a mess. But your distance method here doesn’t even have a lock on it. The distance method that you wrote in your example promoting the proposal is itself a non-thread-safe, racy method.
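+
+The racy method MM describes can be sketched like this (hypothetical `shared struct` syntax, not executable today; since x and y are public shared fields, another thread can write them between the reads inside the method):
+
+```js
+// Hypothetical proposal syntax, illustrative sketch only.
+shared struct Point {
+  x;
+  y;
+  distance(other) {
+    // Four separate reads of shared, publicly writable fields.
+    // Another thread may mutate this.x / this.y / other.x /
+    // other.y between any two of these reads, so the result can
+    // correspond to a state that no thread ever observed whole.
+    const dx = this.x - other.x;
+    const dy = this.y - other.y;
+    return Math.sqrt(dx * dx + dy * dy);
+  }
+}
+```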
+
+MM: So when you say that you need the behavior on the object, accessible from the object, in order to get incremental adoption—well, using shared-state multithreading correctly and robustly, in a way compatible with the ecosystem, is not incrementally adoptable; even if it appears to be incrementally adoptable, you are going to create a hazard. So let’s return to the forcing function that is the main argument for allowing shared-state multithreading into the language at all, which is coexistence with WasmGC. WasmGC has shared structs. Structs in WasmGC are fixed, plain-data objects. They don’t have methods or behaviors associated with them. The coexistence of Wasm and JavaScript, and the constraints that the spec does and does not place on host behavior, clearly allows hosts to expose the WasmGC structs to JavaScript as plain-data, fixed-shape objects, because the fact that they’re racy and concurrent under the hood does not violate any invariants of the host object. That’s the forcing function.
+
+MM: Given that those things are going to be exposed to JavaScript anyway, why not also have JavaScript programmers be able to create such structs—have JavaScript be the origin of such structs and pass them around, starting from a creator in JavaScript? And I think that’s a reasonable enough argument if we stop there.
+
+MM: And that goes back to SYG’s original proposal: you only operate on these things with free functions from the outside. You don’t have behavior on the inside. Given that the first forcing function is coexistence with WasmGC and its shared structs, none of these auto-correlation or prototype-inheritance mechanisms extend to Wasm. The behavior of these objects as exposed to Wasm is not going to be that the Wasm code invokes the JavaScript methods, and therefore the methods can’t encapsulate the concurrency concerns. It’s exactly the rationale of using Wasm as the forcing function that also makes providing these things with JavaScript behavior incoherent. It contradicts the initial rationale.
+
+MM: So I will stop there for now.
+
+CDA: All right.
+
+SYG: I understand your general position. I just don’t understand this specific concern. As a very concrete counterfactual: if your preferred alternative is unencapsulated free functions, I don’t understand how that addresses your concern that you can have thread-unsafe code. With unencapsulated free functions, all the data on the structs has to be publicly accessible on all threads. I agree it is just difficult to get thread-safe code correct.
+
+SYG: And I don’t understand how free functions address that at all, except by being more unergonomic for the same amount of bugs.
+
+MM: What it forces the programmer into is this: if they want to expose an encapsulating object API—where, among the things encapsulated, are the threading concerns in general—in other words, to expose a thread-safe API, the natural way to do that is to hide the structs, and hide the free functions that manipulate them, behind another layer of code.
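+
+One way to read MM’s suggestion is a layering in which the shared struct and the free functions that touch it stay module-private, and only an encapsulating wrapper is exported. A sketch under assumed names (hypothetical `shared struct` syntax, not executable today; `withLock` stands in for some future synchronization primitive and is invented here):
+
+```js
+// Hypothetical sketch of the "free functions behind a wrapper"
+// layering MM describes. Illustrative only.
+shared struct SharedPoint { x; y; }  // kept module-private
+
+// Free function over the plain shared data; withLock is an
+// assumed synchronization helper, not part of any current proposal.
+function readCoords(p) {
+  return withLock(p, () => ({ x: p.x, y: p.y }));
+}
+
+// The only exported surface: the threading concerns are
+// encapsulated, at the cost of one wrapper object per thread.
+export class SafePoint {
+  #shared;
+  constructor(shared) { this.#shared = shared; }
+  distanceTo(other) {
+    const a = readCoords(this.#shared);
+    const b = readCoords(other.#shared);
+    return Math.hypot(a.x - b.x, a.y - b.y);
+  }
+}
+```
+
+The per-thread wrapper objects in this sketch are exactly what SYG pushes back on next: they scale memory use with the number of threads.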
+
+SYG: You don’t want to do that. That means you are creating wrappers per thread, and that will grow your memory use as you scale the number of threads, which hurts scalability.
+
+CDA: We have maybe 2 minutes remaining.
+
+MM: Okay.
+
+SYG: Okay. If we have 2 minutes, we are not going to finish this topic in 2 minutes.
+
+MM: I will yield to the rest of the queue. But just to comment on that last thing: with x and y public, it’s not possible in this proposal as it stands for your API surface to encapsulate the threading concerns.
+
+SYG: There’s more of a chance for it in the future if we were to extend private names as they are today to work in a multithreaded way, but that would require you to have, you know, methods on it. In the long term, I think restricting to free functions is not going to help your goal. Relatedly, the state of shared WasmGC today, and how it is exposed to JS, is up in the air. That is not a high-priority thing to be worked on—certainly it’s lower priority than the core part of the Wasm proposal for getting the structs to be shared. But once we get that part specced and prototyped, we will [ unclear ] it to what they want out of the JS Wasm API: when you need to get these out of Wasm and into JS, and how to access them. I would imagine some of that feedback to be similar to what the early adopters of the JS prototype have given us, which would then put us down similar roads—except then we do it on the Wasm side. We will do that if the partner feedback says so.
+
+SYG: I have on here a list of bullet points I was hoping you could answer. The concern I heard is that you have a well-founded wariness that people will get this wrong. But as I think more about it, I just don’t really understand—I think I just don’t understand. “You don’t think it’s safe enough” is not a design principle that I can work with. There’s no way to make only bug-free programs possible. If we agree that that is not possible—
+
+MM: We agreed that it is not possible.
+
+SYG: Right. I would like to understand what you think is possible, because I hear things like “there’s not enough friction to discourage people from getting things wrong”, and I don’t see why auto-correlation is not enough friction. I don’t understand—yeah, all this stuff. We tried to set up calls offline and we haven’t really been super productive there due to scheduling snafus.
+
+MM: I have lots of answers, but given limited time I will yield to the rest of the queue
+
+CDA: We are at time. Maybe we could continue if folks could be very, very brief; I don’t know if that’s possible given the nature of these questions. SYG, can you see the queue?
+
+SYG: I will try to answer Keith’s question. Can I answer that question directly?
+
+KM: Go ahead. Then the point is, at that point you just have a forcing function, in that eventually Wasm will have this, and what ends up happening, with user friction in that form, is that they will expose a function from Wasm that you will call to set your properties on the shared struct. And it will be the same as the JavaScript API, and probably generated by whatever the build tool is. I am not sure that reduces any of the bugs; it moves them into a different section. But yeah, that’s all I was thinking.
+
+SYG: Yeah. I don’t think we have time to go through the rest of the queue. MAH.
+
+MAH: I wanted to say this is not only about sharing behavior. In the future, we will most likely want to introduce something akin to private fields or private data on those shared structs, and so any correlation mechanism that we introduce is not just for behavior—it’s to have access to the private data. Also, we keep saying that these shared structs will potentially also include working with Wasm code. And I do not see how a correlation/auto-correlation based on module specifiers would work if you want to give Wasm code access to private data on those shared structs.
+
+SYG: Yeah. I think private data is not going to be a thing that survives any FFI boundary. And the correlation mechanism is for prototypes only, which is a JS concept rather than a Wasm one.
+
+MAH: I don’t see why you want to have private data accessible from different languages. That seems like a—
+
+SYG: Our sense of privacy is lexical. How do you do private across languages?
+
+MAH: Yeah. That’s one thing I wanted us to think about.
+
+CDA: Okay, sorry. DE, can you be very brief with your two items in the queue?
+
+DE: I think we can make it so that if something is syntactically auto-correlated, we also correlate the private names. That’s not simple, but it would be nice to have data presented on the cost of the handshake. In the chat, NRO is suggesting various mechanisms that could work without handshakes; those might have soundness issues. This proposal is great. I really hope it can advance in this form. And I don’t yet understand MM’s concerns either.
+
+SYG: I’m sorry, MM. I remain—I don’t know how to make progress here unless you can commit to showing up to more of the regular calls to really work this through; the only fix for this is time. And so far we haven’t been that productive—that’s not true, we have been very productive, but this point about the methods is a sticking point, and I don’t understand the nature of the objection, because to me it sounds like “it’s unsafe”, “it enables lack of safety”, and that’s not a design principle I can work with. I can’t just convey that you feel it is unsafe. I don’t know how to do that.
+
+MM: Okay. Yes. I will show up at more of those things. My intention was to show up for more of these things. I missed the last one purely by accident.
+
+SYG: Okay. And again, not a threat, but the alternative is that we do this in the JS Wasm API if this becomes too unproductive. That’s the real thing we are weighing against. Hopefully that is clear. The slides are there, with this list of questions. Not just MM—anybody who has such concerns, I would like to hear your thoughts on the questions on the slides. We have a Matrix channel. Feel free to open issues on GitHub.
+
+CDA: There was a question briefly in the queue that I answered, but it’s worth confirming: whether the meetings are on the TC39 calendar, which I answered yes. My understanding is you are referring to the regularly scheduled structs meeting that appears on the calendar.
+
+SYG: It’s like a working session or something.
+
+CDA: Structs working session.
+
+MM: Let me also suggest at some point, we should also bring all of this to the TG3 meetings.
+
+CDA: Yeah. The meeting is “JS Structs working session” on the calendar. And yes, that would also be welcome—our agenda for TG3, which meets every week, is sometimes sparse. We are happy to host discussions there as well, if we don’t have another agenda topic on a given day.
+
+CDA: Okay. We are past time. And on day 3. SYG, would you like to dictate the summary for the notes?
+
+SYG: Summary is: I recapped the mechanisms that we are proposing to attach behavior to shared structs. And there remains an impasse, in particular with MM, in that I don’t understand the constraints on their side, and we will try to work it out more in our regularly scheduled calls. But at the same time, the Wasm side is moving, and if we run out of time here, there’s a very real likelihood that things will happen in the JS Wasm API layer instead of TC39, which I think would be a worse result. But that’s a real possibility.
+
+CDA: Okay. That brings us to the end of plenary.
+
+RPR: Good. So I think we should thank our hosts. Mozilla and Aalto University.
+
+(applause)
+
+RPR: This has been an excellent venue, and thanks to Eemeli for DJing and sorting out the audio levels, as well as, obviously, sorting out our social on Tuesday night, which was a lot of fun, and assisting with the conference happening today and tomorrow, Future Frontend. Hopefully lots of you can make it to that. We have the panel, where there will be 4 panelists: USA, DE, SFC, and Michael. I will be asking the questions. So if you have any questions I should ask, or jokes to tell, let me know.
+
+RPR: The next meeting is on the 29th of July, 6 or 7 weeks away. Yes, that’s remote. But for people who like in-person meetings, the one after is at the start of October in Tokyo. All right, Chris, am I missing anything or are we all wrapped up?
+
+CDA: I don’t know. I am so tired I have no idea.
+
+RPR: We should thank you for calling in—you have been awake a lot at weird hours. Yes, of course, to our captioner, the transcriptionist: thank you so much for all of your work. And the note-takers as well—some people have been very, very dedicated at this meeting and previous ones. I am not going to go through all the names, but it is always appreciated.
+
+### Speaker's Summary of Key Points
+
+- An update; no consensus was sought
+- Presented 2 mechanisms to allow shared structs to have methods: per-Realm [[Prototype]] and auto-correlated struct definitions
+- Mark Miller & co want it to be harder to write thread unsafe code, and would prefer shared structs to only have free functions, without methods
+- Champion group doesn't understand Mark's argument
+
+### Conclusion
+
+- Stakeholders to continue methods discussion in the already regularly scheduled shared structs working session call