This repository has been archived by the owner on Jul 29, 2021. It is now read-only.

Feedback on the proposed model #9

Open · JamesRandall opened this issue Sep 25, 2019 · 38 comments

Labels: question (Further information is requested)

Comments

@JamesRandall commented Sep 25, 2019

Issues

  1. Linking project scale to quality could create a strange chicken-and-egg trap for projects - to be classed as high quality a project has to achieve scale, but to achieve scale (if this maturity model takes off) a project may need to already be classed as high quality in many environments to be adopted in the first place.
  2. Again on project scale - this would seem to prevent otherwise very well-managed projects that happen to serve a niche audience from being assessed as high quality. Perhaps this could be addressed by introducing other criteria on an OR basis with scale - for example, longevity. It's worth noting that scale sits at the top of your "in order of importance" criteria, yet even the last item on that list is required - so scale would seem to be very important to you.
  3. I see no reason a project should need to be a member project of the .NET Foundation to be classed as a high quality project. The two are not synonymous and there are plenty of high quality projects that are not members of and may not ever wish to join the .NET Foundation. You state this as required on the policy page (unless a specific exemption is made). This is particularly troublesome when coupled to the point below.
  4. Requiring high quality projects to depend only on level 2 or higher projects means that this becomes a very closed loop.
  5. A high quality project is required to apply .NET design guidelines but these are very OO focused. What about F#?
  6. A project only requires user documentation at level 3 - for adoption, I'd suggest documentation is more important than that implies. I'd hope to see it at level 1 - but see my comment about the wide gap between 1 and 3.
  7. "Some of the qualities are critical and others are more supporting (they are listed in order of importance, per category)" - it would be useful to make clear which are critical, order of importance tells me little (in a list of 5 all could be critical or only 1 critical). As an example a projects membership of the Foundation is listed at the bottom of Health for a level 3 project - but the policy page states that this is required (so presumably everything above it is too) - I can't divine this from your criteria as presented.
  8. It seems like a massive jump from Incubator to High Quality and this could mean that many projects never make it and authors feel disincentivized to do so.
  9. How was feedback on this sought? 5 projects are listed as helping the working group and joining it as part of the public rollout, but they are all existing high-profile, very popular projects (Dapper, IdentityServer, MiniProfiler, StackExchange.Redis, Newtonsoft.Json, the .NET team) who are going to find it easy to jump in at a high level on the maturity model. Were smaller projects and contributors consulted? If not, I would suggest this has been put together with a degree of selection bias.
  10. Submission of a project to this process requires membership of the Foundation. Is this about project quality or drumming up membership? While membership is "free", you do ask for voluntary annual dues. That creates pressure, and some OSS projects (one of my own, for example) already have running costs over and above my time (Azure fees).
  11. Policies are mostly focused on project contributors - the "working group process" is very light on detail. Appeals? Transparency? Working group diversity? etc.
  12. It is noted that this framework is modelled on other frameworks (such as Apache's); however, I would suggest .NET faces different challenges to other OSS ecosystems due to .NET's background (alt.net etc.), and so "it's this way because this works for other frameworks" falls a little hollow - though, accepted, it's a starting point.

Questions

  1. How are assessments going to be handled? How transparent will the process be? Appeals?
  2. Once accepted are projects regularly re-reviewed?
  3. Linked to the above, how are regressions down the ladder dealt with? Particularly with respect to dependencies - if a project is found to no longer be at level 2, then are all level 3 projects that depend on it automatically downgraded?
  4. Is this model going to be transparently applied to Microsoft's own OSS packages? Presumably it will need to be to satisfy the level 3 requirements.
  5. Is some form of tooling going to be provided that would allow an assessment / report of where dependencies sit on the maturity model? (For projects with many dependencies, and dependencies on dependencies, figuring this out could be hard.)

General

  1. It's hard not to conclude that the above seems designed to make this viral, as @forki observed on Twitter - which runs contrary to claims that this is opt-in. If it takes off, projects that don't opt in will find themselves having to consider it, or consider whether staying out will damage their vitality. You can be opt-in on paper, but in reality network effects may make this otherwise.
@richlander (Collaborator)

This is excellent feedback. Thanks for taking the time to write it up. Judging by the "reactions", it resonates with other folks, too.

Request: Can you number your points? Once you do, I'll provide feedback. The numbering will make it tremendously easier to answer and for readers to follow along.

Tip (in case you don't know): if you just put "1. " for all the list items, the markdown renderer will number the list correctly.
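For example, this markdown:

```md
1. First point
1. Second point
1. Third point
```

renders as a correctly numbered list (1., 2., 3.).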

@JamesRandall (Author)

No problem, happy to provide feedback. I've added numbers - hope that helps.

@mwadams commented Sep 25, 2019

One comment I would make is that the whole model seems to be barrier-based rather than accumulative.

For example, and relating to point 4 above, instead of requiring that a project only depends on Lev 2 and above to be Lev 3, this could be recast as a positive for a Lev 1 project that has accrued Lev 3 dependencies.

Clearly there may be some "veto-like" criteria, but I think an accumulative, positive model is generally better for encouraging participation and openness.

I am also curious as to why it has not taken a more obviously risk-based approach, because that is really the question that this kind of maturity model is trying to address: what is the risk of taking a dependency on this project?

And of course, that risk varies on different axes.

@JamesNK (Member) commented Sep 25, 2019

> designed to make this viral

Being viral (a trusted project can only depend on trusted projects) is necessary.

Imagine:

  1. Newtonsoft.Json does everything right and is trusted.
  2. Newtonsoft.Json depends on LeftPad.NET.
  3. LeftPad.NET's build server is located in Russia, is compromised, and a backdoor is injected into the NuGet package.
  4. Newtonsoft.Json is now compromised.
  5. Applications that use Newtonsoft.Json are now compromised.

A chain is only as strong as its weakest link.
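To make the weakest-link point concrete, here is a minimal sketch (hypothetical types and package names, nothing from the proposal itself) of why an effective trust level is the minimum over the whole dependency graph:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// e.g. a hypothetical "level 4" package with one level-1 dependency is
// effectively level 1 for a consumer assessing risk:
var leftPad = new Package("LeftPad.NET", 1, new());
var json = new Package("SomeJsonLib", 4, new() { leftPad });
Console.WriteLine(Trust.EffectiveLevel(json)); // prints 1

// Purely illustrative model: a package, the ladder level it was assessed at,
// and its direct dependencies (assumed acyclic).
record Package(string Name, int AssessedLevel, List<Package> Dependencies);

static class Trust
{
    // The level you can rely on is the minimum across the transitive closure:
    // a single weak dependency caps the effective level of everything above it.
    public static int EffectiveLevel(Package p) =>
        p.Dependencies.Count == 0
            ? p.AssessedLevel
            : Math.Min(p.AssessedLevel, p.Dependencies.Min(EffectiveLevel));
}
```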

@mwadams commented Sep 25, 2019

But any of those links may be the weakest. E.g.

eslint/eslint-scope#39

No dependency should be trusted, and a more robust maturity criterion to address this risk would be the validation process for upstream dependency updates. The overall maturity of the dependency has little impact on the risk in this area; the maturity of the security processes in the consuming project is much more significant.

Hence why I think this should be reversed: the fact that a lower-level project is depended on by a higher-level project (and subject to its stringent security review) should count positively toward it, not negatively against the consumer.

(As a 'for example')

@richlander (Collaborator)

I agree that there is a significant requirement/burden today on the consumer to really validate that everything is safe. The goal is to cut that burden down to a more manageable size for consumers, so that using OSS isn't so expensive and the safety so undefined.

> What is the risk of taking a dependency on this project?

W/o a trusted security process, it is unbounded (modulo the permissions level that the app runs at: admin, standard user, ...).

Maybe I'm not thinking about it correctly, but I cannot see how the accumulative model would work, why taking a lower-level project dependency should be counted as positive, and what fundamental value the accumulative model delivers. Meaning, I don't understand.

One thing that is out of scope of the proposal, but is in scope for software engineering, is that you should take the fewest dependencies possible. This makes it easier to reason about software more readily across a variety of dimensions.

@mwadams commented Sep 25, 2019

I think what I am saying is that some kind of point-in-time review by authority across all axes, to gain admittance to a one-size-fits-all maturity club, is necessarily less effective than an accumulation of confidence in those axes as determined by the consumers of those projects and their relative level of competence in that axis.

It also allows for a less prescriptive approach to any given axis; prescription will tend to stifle innovation.

@richlander (Collaborator)

Answers to the questions ...

  1. Project scale and quality -- This is good feedback. I don't have a good sense of how this part is going to work in practice. The main "test" is validating that the community finds a library valuable, such that it is worth elevating. So, don't think of this as a working group choice, but crowd voting. That said, the working group needs to be super sensitive to reading crowd-voting signals from niche domains differently than mainline ones. Perhaps you can help with that.
  2. Project scale -- same answer.
  3. Level 3 required to be a Foundation project -- super valid point. My answer is at #10 ("Member project of the .NET Foundation" at Level 3 - why?). This point is absolutely on the table to discuss further, as #10 suggests.
  4. Closed loop -- That is pretty inherent. For example, an L4 project that depends on L1 (or on one not in the ladder) doesn't actually deliver a strong value prop. This is largely covered in an earlier comment on this issue (#9).
  5. API guidelines -- sounds like the guidelines are broken and we should fix that. F# projects should be F#. We can fix this one as you see fit for F#.
  6. Docs -- excellent point. What do others think?
  7. Softness on critical qualities -- Agreed.
  8. Big jump from 2 to 3 -- That's largely a function of keeping to four levels and 1 and 2 being easy. Got a suggestion on how to fix this?
  9. Feedback before now -- Yup, you are right that we got feedback from the projects you mention (which, BTW, was excellent). I will totally do calls with projects that feel they are representative of the earlier levels, and we can work through the issues they raise. If you want to be part of that, awesome. Tell me if that is interesting and I'll organize it (and I can fit multiple timezones; sadly, I only speak English).
  10. Foundation membership -- Covered in #10 ("Member project of the .NET Foundation" at Level 3 - why?).
  11. Working group process -- Agreed. This is lightly defined. That's the next doc to write. These docs are a WIP.
  12. I would say these docs are inspired by other foundations, but not identical. There are really important differences. Also covered in #10.
  13. Assessments -- Needs to be written up and reviewed by the community. Coming soon.
  14. Re-review -- Maybe, but probably not. Probably only if a community member tells us that they think the level is now stale. However, a project will never be moved up or down w/o talking to the maintainer(s).
  15. Viral nature of dependencies -- Covered here: https://github.com/dotnet-foundation/project-maturity-model/blob/master/maturity-ladder-policies.md#projects-no-longer-meeting-ladder-requirements
  16. Microsoft packages -- yes, transparent. I talked to a team today and told them how to apply via the process defined @ https://github.com/dotnet-foundation/project-maturity-model/blob/master/maturity-ladder-policies.md#registration. There is no registration other than the one defined in that doc. The projects that are part of the pilot also need to use that same system.
  17. tooling -- interesting idea. We are not that far yet. It will be manual at the start.
  18. viral -- it isn't intended to be viral as an end-goal, but as a consequence of needing projects and their dependencies to be fully coherent. Level 4 has the most viral requirements: levels 1 and 2 have none, and level 3 has them, but not as strong as L4.

Note: I will update my answers as needed, and keep them to this issue entry, even though the conversation might keep on moving forward.

@richlander (Collaborator)

@mwadams -- is the Apache model more desirable to you? https://community.apache.org/apache-way/apache-project-maturity-model.html ... It defines specifics, but no progression. Or do you want something entirely different again? I cannot quite tell.

@JamesRandall (Author)

@JamesNK this proposal does a limited amount to address that, as trust in a project (particularly its less code-based aspects) driven by a review process is point-in-time based, the point in time being the review itself (hence some of my questions about ongoing review). As a maintainer I could be approved as level 3 and then immediately (deliberately or otherwise) take actions to invalidate that.

If I'm genuinely concerned about such things as a consumer, I still need to review each dependency (and its policies etc.) each time I take a version of it, to assure myself things are as they were when last reviewed by the assessing panel.

Of course this isn't a hard 1-or-0 type question - it's a matter of making value calls around risk mitigation, and some of it comes down to building trust in people over time. Ongoing review can mitigate it but that would need a committed cadence so that people making those risk calls could do so in an informed way.

As an aside, but somewhat related as again the focus seems to be here: there's a lot of focus on the consumer of projects throughout this proposal (and already in this thread) and not a lot on authors and contributors. If this does take off and the viral network effects kick in, it is likely to add pressure to folk who often already struggle with the demands of maintaining packages - by the time you get to levels 3 and 4, what is effectively being asked is for a project to be managed on a very professional basis for the benefit of consumers who are highly concerned with the risks this maturity model addresses - largely commercial entities. When OSS is developed by commercial entities this isn't really an issue, but I must admit to being concerned about the impact on the many people who do this on the side / as a hobby. Funding for OSS is not in a good way in general, and I can't help thinking that if we're going to look to professionalise the OSS space, that has to go hand in hand with looking at and improving those aspects of it. It's late in the UK - thoughts maybe better formed tomorrow.

@JamesRandall (Author) commented Sep 25, 2019

@richlander - thanks for the detailed reply; as it's late in the UK I'll go through it properly tomorrow.

However, I'll quickly chip in: without regular re-review, it's hard to see the value this provides to those looking to use the model to address risk in package adoption. It's not uncommon in such environments to adopt a cadence for re-review of dependencies and risks. If there's no regular re-review of a project's maturity level, then the maturity model rating's value diminishes over time (with a half-life somewhat dependent on the consumer's sensitivity). And I'd argue the appearance of a badge on a project's GitHub page could give a false impression of the project's current status - and that of all its dependencies.

@JamesRandall (Author) commented Sep 25, 2019

In fact, thinking about it, if you don't regularly re-review projects then what you are actually doing is reviewing people as well as projects. More so, I would argue.

When you assign the rating you're essentially saying that "we've verified this project is at level n on date yyyy/mm/dd and we trust the maintainer(s) to inform us if anything affects that positively or negatively".

@JamesNK (Member) commented Sep 25, 2019

What policies are you worried about projects breaking after they've been certified? If it is dependencies, then there are tools for automatically analyzing the dependencies that a repo uses. @richlander, that might be something to consider.
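(For instance, recent .NET SDKs can print a project's full transitive package graph with `dotnet list package --include-transitive`; an automated check along these lines could start from output like that.)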

Authors were considered. That is why there is a sliding scale of 4 levels rather than an all or nothing approach. And the .NET Foundation should take care of anything that requires buying something (e.g. a certificate package and Authenticode signing).

If you think some of the requirements are particularly difficult then you should be more specific with feedback on them. Make a case for why they should be moved to a higher level, or that they aren't important and should be removed.

@glennawatson (Contributor)

One of the big issues in the past with the DotNetFoundation is that we've had problems communicating with the DNF admin, and we also don't hear much feedback from the team members.

We've also had technical problems that were easily fixed but, due to these communication issues, went 8-12 months without resolution (e.g. we had Azure DevOps and authentication issues where I was the only one able to approve builds for 12 months). Some of these issues still exist, but they have way less impact than they did 12 months ago.

Since a lot of these levels rely on having support from the DotNetFoundation, and given the past experiences, it'd be nice if a lot of the more undefined policies were also considered sooner rather than later. Some of the communication issues have been resolved, but not all. @devlead, for example, suggests on a weekly basis getting communication from the board members out there in the form of videos or similar mechanisms. The communication issues even extend to these policies: a lot of project maintainers found out through a tweet rather than through internal DotNetFoundation project-leader communication mechanisms.

I think from our project's perspective it's not so much the requirements that are an issue, but whether the DNF processes can handle it.

@JamesRandall (Author) commented Sep 25, 2019

@JamesNK - will happily do so tomorrow (as indicated).

In the meantime perhaps expand on some of your own comments - for example how were authors considered? And perhaps remember that what those of us providing feedback can see is an output - not a thought process.

Edit: I’d certainly support automated verification where possible.

@JamesRandall (Author) commented Sep 25, 2019

So a couple of quick examples, just from level 1, of things that are time-sensitive and could be difficult to automate:

  • Roadmap documentation - is it still being maintained? Is it up to date?
  • Is the maintainer still encouraging the community and responding to issues?
  • Is the review and merge process being followed?

If you move on through to level 3 there are others, and they are essentially the more subjective "soft" parts of the project rather than the code. It might be possible to remove the subjectivity by changing the wording. For example, the point about fixing issues and encouraging contribution could become:

> Issues are processed within a month of logging

Then you can measure it. But does that still have sufficient value?
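As a sketch of how such a measurement might be automated (this assumes the Octokit.NET client; `owner`/`repo` are placeholders, and "closed within 30 days" is only a rough proxy for "processed"):

```csharp
using System;
using System.Linq;
using System.Threading.Tasks;
using Octokit;

class IssueSlaCheck
{
    // Rough check of "issues are processed within a month": what fraction of
    // issues opened in the last year were closed within 30 days?
    static async Task Main()
    {
        // Unauthenticated client: fine for a sketch, but rate-limited.
        var client = new GitHubClient(new ProductHeaderValue("maturity-check"));
        var issues = await client.Issue.GetAllForRepository(
            "owner", "repo",
            new RepositoryIssueRequest { State = ItemStateFilter.All });

        var recent = issues
            .Where(i => i.PullRequest == null)            // exclude PRs
            .Where(i => i.CreatedAt > DateTimeOffset.UtcNow.AddYears(-1))
            .ToList();

        var withinSla = recent.Count(i =>
            i.ClosedAt is DateTimeOffset closed &&
            (closed - i.CreatedAt) <= TimeSpan.FromDays(30));

        Console.WriteLine($"{withinSla}/{recent.Count} issues closed within 30 days");
    }
}
```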

To give some context when I read the maturity model and have commented here I’m thinking about the projects - not packages and code alone.

The maturity of a project, when looked at holistically, changes over time (OSS or otherwise), and it's as much about people as it is code. I don't believe the assessment of a project's maturity can be entirely automated (though some aspects can be, and signals can be derived even for the soft parts of a project - but to do that in the OSS world, with all its variety, would be a challenge to say the least). And point-in-time assessments of systems that change over time have limited value that diminishes over time.

With regards to the wide gap between levels 1 and 3, one approach might be to take a more broken-out or scored system rather than a tiered one. Score + colour maybe? For example @endjin have one here for their own projects: https://github.com/endjin/Endjin.Ip.Maturity.Matrix (no affiliation other than knowing @HowardvanRooijen on Twitter).

@richlander (Collaborator)

> To give some context when I read the maturity model and have commented here I’m thinking about the projects - not packages and code alone.

Absolutely. That's my thinking, too.

> Ongoing review can mitigate it but that would need a committed cadence so that people making those risk calls could do so in an informed way.

Agreed. I said in my long answer above that we wouldn't re-review. My real underlying thinking is that we shouldn't rely only on the Foundation for review (the Foundation is largely volunteers, too). I agree with you that we should have a review system, but think about the roles of the foundation and the community within that.

There are a few models (that I can think of) to consider:

  • Review by the foundation on a cadence.
  • Review by the community on a cadence (the project comes up for crowd voting every n months).
  • Mixture of the two where crowd voting is say once/quarter and foundation review is once/6 or 12 months.
  • Pure reactive, where the community can report projects that they observe are no longer meeting their ladder level, and the foundation then does a review.

There are various levers here to play with. For example, the cadence for L1/2 and L3/4 projects doesn't need to be the same. It could be twice a year for L1/2 and four times a year for L3/4.

Which do you prefer? Can you think of some others?

> It might be possible to remove the subjectivity by changing the wording.

Boiling everything down to a numeric objective (for example, process issues within a month) is an interesting idea. I am hesitant to go in that direction, at least initially, because that's something I would expect significant maintainer resistance to (and I would be entirely sympathetic). I can see how there is a benefit to consumers in having tighter SLAs, but I didn't view it as the most critical thing to put in place for consumers and didn't expect overwhelming acceptance from maintainers.

> With regards to the wide gap between levels 1 and 3, one approach might be to take a more broken-out or scored system rather than a tiered one.

That system is really interesting. I'm wondering if we should do both. On one hand, it is crazy to have more than one scheme. On the other, the two models are trying to achieve different things. The key aspect of the ladder is a prescribed progression. It's a little bit like going to university: a degree program defines a prescribed progression of courses, and you end up as a software engineer or an accountant. Anyone looking at your degree knows exactly what it means and can consider hiring you. This is as opposed to just +1ing easy or interesting (to you) courses for four years with the expectation that you end up with a valuable degree that has meaning to others (hint: it won't). So, we could break out pure quality as a separate concept and make that the scoring system. Clearly, more thought is required on that.

> As an aside, but somewhat related as again the focus seems to be here: there's a lot of focus on the consumer of projects throughout this proposal (and already in this thread) and not a lot on authors and contributors.

This is a very real issue. The interest in dual licensing is very related. I have a few thoughts on this topic (some of which @JamesNK covered already).

  • The .NET Foundation should cover as much cost and headache as possible with automated systems. Clearly, not all of that will arrive on day 1 but we do need to push pretty hard on that angle.
  • I think we need to adopt a culture where it is fully OK if a project never makes it past level 2. I totally get the attraction to the highest levels, but if every maintainer feels the pressure of that, then we will indeed have a severe burnout problem.
  • Even though maintainers are burdened, it doesn't mean there isn't a need for such a model. We need to figure out what a good model with a balance looks like, and that's what we are discussing. An alternate scheme would have been level 4 only (obviously with a different name), where only the top 100 NuGet libraries are candidates (for example) and those are the sole focus. Most maintainers would not be (extra) burdened because they could not participate in the program. This type of idea would likely be more of a Microsoft-led program to ensure the libraries Microsoft uses satisfy certain criteria. This isn't a theory: Microsoft announced an OSS sponsorship program earlier this week. That kind of program can be (and obviously is) super narrow and not super transparent. While that's good for lots of people (not just those maintainers), it's not a broad-based program.
  • We do need to pair the ladder with some other programs that maintainers feel are helping them. The maintainer bench is one of those ideas, with the intention to train more contributors as maintainers, either for new projects (maybe from the forge) or for existing projects whose maintainers are looking for help.
  • Related to that, we did discuss integrating GitHub sponsors throughout the flow (including on nuget.org) and paid support. I didn't include this because it is hard to define (more the support part than the sponsors). If there is a lot of interest in these topics, we can consider accelerating the development of some options. I hesitate to even mention some of these ideas because they are not at all thought through at this point.

I didn't address every single one of your points this time, but I think I did cover the bulk of it. Tell me if there is anything I missed that you feel needs discussion.

@richlander (Collaborator)

@glennawatson -- this is the first I've heard (except for you telling me earlier in the day) about DNF infra not delivering on its promise. I will directly follow up on that. This is a big concern for me because the proposal definitely relies on DNF infra that works super well and makes you happy to use it.

@glennawatson (Contributor)

I have emails to the foundation going back to December 2018 that are unanswered. Geoffrey Huntley, for example, attempted to CC in on some issues; those also went unanswered.
I had issues I attempted to open about the problem; they changed the GitHub repo around and those got deleted.
I have GitHub team chats about the issue. We had a dodgy workaround in the end, which we still have to use with Mozilla sandboxes, but at least it solves our issue; we gave up on making it perfect. It seems the use of Active Directory with the DNF makes things very fiddly. Also, the number of agents doesn't match what you'd technically get for free with the shared infrastructure. We've had it doubled, which has kept us from being under load, but that required emails/issues/tweets to get fixed. E.g. we were running out of runners: unlike each project having its own org (and therefore 10 build agents), we were sharing 15 (now 30) between 40 projects.

@devlead has become our unofficial leader in terms of communication. He set up a leadership Slack with Jon to try to mitigate our issues. His team helps out the DNF a lot with the newsletter admin. Even he's getting frustrated with the lack of movement in terms of board communication. The leadership Slack and the GitHub teams have made this better than it was, but @devlead keeps prompting and suggesting the board get out there and advertise continuously.

These policies seem to rely on the dnf communication and infrastructure running at peak efficiency which they haven't always.

@glennawatson (Contributor)

Also one thing to note: ReactiveUI was the first project to go fully in on DNF infrastructure, unlike very large orgs like Cake and Prism. So we've been seen as these annoying people who keep complaining when others weren't, but that's mostly due to being the largest project using the DNF infrastructure.

@glennawatson (Contributor)

Also worth noting that my messages make this sound all doom and gloom about the DNF. It has improved and come a really long way; there are just those teething issues that these policies should have some way of addressing, for either party.

@forki commented Sep 26, 2019

One thing that I already wrote on Twitter: it's perfectly fine for Microsoft to come up with a set of requirements that they give to the community and say: "look those are the things you need to do so that we can put your product into our stack". That would be a great thing if done by MS.

Putting the ladder into the foundation, on the other hand, makes it a thing that will spread to the whole community and will put pressure on projects that will never need to go into the MS stack. People in companies will just blindly adopt it, and now projects are called "untrustworthy" just because they don't align with a set of requirements that MS needs internally. That's not good.

@devlead (Member) commented Sep 26, 2019

I'm too tired and have too much on my plate to be constructive ATM, so I'll try to keep it short.

I try to see myself as generally a very positive person, with the ambition to have patience and see the long-term goal. I don't believe in Twitter rants (I fail at times :) ) and would rather diplomatically reach out to people first before raging in public. If I weren't that person, then I would probably make statements like: the foundation is

  • Underfunded
  • Understaffed
  • Overcommitted
  • Not transparent/communicating enough

and therefore adding more things to their plate before solving the above seems unwise.

> @devlead has become our unofficial leader in terms of communication. He set up a leadership Slack with Jon to try to mitigate our issues. His team helps out the DNF a lot with the newsletter admin. Even he's getting frustrated with the lack of movement in terms of board communication. The leadership Slack and the GitHub teams have made this better than it was, but @devlead keeps prompting and suggesting the board get out there and advertise continuously.

@glennawatson Kinda feel you're giving me a bit too much credit... but I've tried to express feedback in a constructive way, assist projects, set up Skype meetings, and express issues and possible solutions when I've met @jongalloway and, previously, @martinwoodward in person.
I think one of the big assets for the foundation and community is its member projects, and getting channels for us to communicate, exchange experiences and support each other has been super valuable.
We express our frustration, concerns and feedback because we love the idea of the foundation; we care and want the foundation to be successful.

> @glennawatson -- this is the first I've heard (except for you telling me earlier in the day) about DNF infra not delivering on its promise. I will directly follow up on that. This is a big concern for me because the proposal definitely relies on DNF infra that works super well and makes you happy to use it.

@richlander how has your experience been with the DNF infra team? Were they consulted during this proposal? Are they comfortable with this, and do they have the resources to scale it?

@Aaronontheweb
Have more thoughts about the proposal as a whole, but I wanted to respond to @glennawatson 's point about the existing .NET Foundation infrastructure:

> Also, the number of agents doesn't match what you'd technically get for free with the shared infrastructure. We've had it doubled, which has kept us from being under load, but that required emails/issues/tweets to get fixed. E.g. we were running out of runners: unlike each project having its own org (and therefore 10 build agents), we were sharing 15 (now 30) between 40 projects.

So I feel a bit bad about this - the Akka.NET project, every time we receive a PR, kicks off 8 build agents in parallel to run our rather complex builds (a large test suite that needs to be run on many platforms, including long-running network fault-tolerance tests). Some builds take as little as 2 minutes per agent. Others, like a full build as a result of a root dependency being modified, kick off jobs that take as long as 90 minutes to run on each build agent. We're going to be expanding this to up to ~12 build agents once we bring our long-running performance tests and networking tests on Linux back into the mix.

When we're doing a lot of work on a new release, like we are now, it's not uncommon for us to have 4-5 pull requests all running builds at the same time. This means the rest of the .NET Foundation projects might have to wait at least an hour before any agents are available.

This is a trivial problem to solve for Microsoft, I suspect - but what it points to is Glenn's larger point: that maintaining the infrastructure needed to work up this maturity ladder is going to become an increasingly demanding task for the .NET Foundation as:

  1. More projects start using the DNF infrastructure and
  2. More of those projects are larger than a single library and demand significantly more resources.

What are the DNF's plans for managing the infrastructure all of these projects are going to need in order to mature? Or, alternatively, why shouldn't projects be able to use their own - so long as you can see the green checks on GitHub or GitLab or Bitbucket, why should it matter?

@Aaronontheweb
Add to this list, re: infrastructure:

  1. Authenticode signing services;
  2. Certificates for said signing;
  3. .NET Foundation CLA bots;
  4. Static / security analysis tools per the maturity ladder requirements;
  5. Hosting / deployment for websites; and
  6. Access control + ability to delegate to other project members for all of the above.

Today, in order for me to give another Akka.NET member the ability to set up CI for one of our sub-projects or plugins - is there a way I can do that without having to wait on a human being at the .NET Foundation?

What are the plans for scaling this?

@clairernovotny (Member)

On the infrastructure side, we're working with the Azure Pipelines team to ensure we have as many agents as we need so as not to cause a bottleneck.

One of the challenges right now is that Pipelines doesn't have a good way to see queue times (average, peak) across the whole account, so we don't know when things are stuck until people tell us. If you start seeing long queue times, please post an issue here https://github.com/orgs/dotnet-foundation/teams/project-support, or email me/us.

@richlander (Collaborator)

@forki --

> One thing that I already wrote on Twitter: it's perfectly fine for Microsoft to come up with a set of requirements that they give to the community and say: "look those are the things you need to do so that we can put your product into our stack". That would be a great thing if done by MS.

That is part of what this proposal is and part of the motivation. In 3.0, we removed all 3rd-party dependencies from the product. We didn't want to do that. It hurt us to do it. We did it because we had no model for servicing 3rd-party dependencies or managing trust/quality. This experience was a partial influence for this proposal, for me. We want to use community packages and recommend them to big/conservative customers. We need a model for that, and this is it.

We could have just had L3 and L4 and called it good. Is that what you'd prefer?

> @richlander how has your experience been with the DNF infra team? Were they consulted during this proposal? Are they comfortable with this, and do they have the resources to scale it?

I did not talk to them. I didn't even realize there was such a team. I (incorrectly/naively) assumed that DNF infra was fine. My mistake and I'm sorry about that.

@forki commented Sep 26, 2019

@richlander that's why I think it should be advertised as such and not as something the whole community must/should adopt. Too many companies will just follow this blindly. I understand that there is a desire to have it open and somewhat community-driven. But let's be honest: the final decision on whether something belongs in the MS stack is made by MS lawyers. So this should not be something in the .NET Foundation. It should be something that MS states as their own minimum bar for adopting an external project.

@richlander (Collaborator)

> But let's be honest: the final decision on whether something belongs in the MS stack is made by MS lawyers.

That isn't true. It was >5 years ago, but not now.

As I said earlier, we removed all the 3rd-party dependencies from the product for 3.0. There were exactly zero conversations with lawyers as part of that. The product group decided that. If we move forward with this ladder and feel comfortable taking 3rd-party dependencies again, then we will make that choice, and it is unlikely lawyers will be consulted for it, unless we have specific concerns where we need advice. If all the libraries we want to use have a compatible license, what advice would you expect lawyers to give us? Certainly nothing related to coding style or code coverage.

> that's why I think it should be advertised as such and not as something the whole community must/should adopt.

I only said that Microsoft being able to use/recommend projects was a partial inspiration/motivation. If that were the sole goal (which it definitely is not), then you are right that the proposal would have been different, but it still would have been public. Like I said earlier, it would have been just L3 and L4. If that had happened, I think people would be frustrated because the step function would have been too high, with no accompanying programs to help people along the way. That's what this proposal is intended to deliver ... a much softer/earlier on-ramp into the ladder and more supporting programs. It's also more similar to how other foundations work, with sandbox and incubation programs.

@forki commented Sep 26, 2019 via email

@richlander (Collaborator)

Your question on JS (or other ecosystem) tech is answered here: #12 (comment). The short answer is that code from other ecosystems isn't considered.

> But right now I don't see how it would ever be "trustworthy" in the eyes of the foundation.

Why is that?

@jongalloway (Collaborator)

> Today, in order for me to give another Akka.NET member the ability to set up CI for one of our sub-projects or plugins - is there a way I can do that without having to wait on a human being at the .NET Foundation?
>
> What are the plans for scaling this?

This is a really good question, and I think worth spinning off as a separate issue so we can track it more effectively: #23

@jongalloway (Collaborator)

> I try to see myself as generally a very positive person, with the ambition to have patience and see the long-term goal. I don't believe in Twitter rants (I fail at times :) ) and would rather diplomatically reach out to people first before raging in public. If I weren't that person, then I would probably make statements like: the foundation is
>
> • Underfunded
> • Understaffed
> • Overcommitted
> • Not transparent/communicating enough
>
> and therefore adding more things to their plate before solving the above seems unwise.

This is a very clear problem statement, and I don't disagree. The plan has been to scale up by bringing on the project support action group, but documenting processes and getting them set up takes time, too. It's a classic startup problem of being too busy to get help, even when people are asking to help. I'll work to get a project support action group meeting set up over the next few weeks (hopefully next week) to both gauge interest and start getting that ramped up.

> @richlander how has your experience been with the DNF infra team? Were they consulted during this proposal? Are they comfortable with this, and do they have the resources to scale it?

The DNF infra team is basically me and @onovotny. We've been handling everything since I got started 2.5 years ago. The CLA service is run by another team, but everything else is just us. One visibility issue is that the .NET team at Microsoft has a lot of engineering support and doesn't rely on DNF. One potentially positive effect of both this proposal and the general discussion is to create a more standardized ecosystem. We want to document what Microsoft's .NET dev team does and make it available to the community, and in doing that we'll also make these needs visible to that team.

I added #23 to specifically track the infrastructure scaling issue.

@forki commented Sep 27, 2019

> Why is that?

Only considering the IL side of dependencies, in a system that clearly also has security threats / maintenance challenges from other kinds of dependencies, is a weird approach. "All .NET deps must be Level 4 and we check it like crazy, but we don't care about leftpad or bitcoin miners as long as they are not in IL."

@jongalloway (Collaborator)

> One thing that I already wrote on Twitter: it's perfectly fine for Microsoft to come up with a set of requirements that they give to the community and say: "look those are the things you need to do so that we can put your product into our stack". That would be a great thing if done by MS.
>
> Putting the ladder into the foundation, on the other hand, makes it a thing that will spread to the whole community and will put pressure on projects that will never need to go into the MS stack. People in companies will just blindly adopt it, and now projects are called "untrustworthy" just because they don't align with a set of requirements that MS needs internally. That's not good.

Very good point - spun this off to #27 to make sure we track it. Very open to suggestions on this.

@devlead (Member) commented Sep 27, 2019

> The DNF infra team is basically me and @onovotny.

@jongalloway that's a scaling concern: if we're going to scale this to potentially all DNF and .NET projects, we need a proper support infrastructure. It's not fair nor realistic to ask that of you and @onovotny.

Great that this is tracked in #23 👍

@HowardvanRooijen
I thought I'd jump in and explain a little bit more about the IP Maturity Matrix (IMM) that @JamesRandall mentioned above.

To understand the IMM, you need to understand a little about endjin, the company that @mwadams and I founded in 2010. We wanted to create a cloud-first consultancy that used Intellectual Property to enable a value-based pricing model rather than the more traditional T&M-based one. The majority of our revenue comes from building .NET applications, APIs, and data & AI solutions for customers, hosted in Azure.

Over the past 9 years we have been very successful at creating and reusing IP, in fact we may now have too much to manage & maintain effectively. So we've recently decided to adopt an "open-source by default" policy and are in the process of migrating (while also updating, improving, refactoring and consolidating) all of our core IP onto GitHub (and nuget.org) from private repos in Azure DevOps.

Internally, we have a Director of Engineering (@jamesbroome) and Technical Fellow (@idg10) who are responsible for the governance of our technical processes and IP.

During our first discussions, we identified the following problems we face:

How do we:

  • host IP in GitHub?
  • set up CI / CD Pipelines for projects in GitHub?
  • license open source IP?
  • deal with Versioning, Packaging and Distribution?
  • manage dependencies in our OSS projects?
  • ensure OSS Contributors assign endjin their copyright for contributions?

We soon realised that the next set of problems we'd have to deal with were those of governance and adoption. How do we get everyone inside the company to understand, trust and use the IP we're open sourcing (rather than fall into standard Not Invented Here behaviours)?

We identified that our IP exists at different levels of fidelity:

| Level | Type |
| --- | --- |
| 0 | Script / Template / Code file |
| 1 | Component |
| 2 | Tool |
| 3 | HTTP-based APIs |
| 4 | Solution |
| 5 | Product |

and that different "maturity" concerns were applicable at each level. We looked at some of the other maturity models out there and realised that many (like the Apache Maturity Model) were mainly geared towards "Level 5" IP.

We also realised that many of the maturity measures were already in, or were extensions of, our existing "definitions of done".

  • Shared Engineering Standards - We have developed a standardised set of configuration files for projects and IDEs. Using these creates a pit of quality for internal and external developers to follow.
  • Coding Standards - We have agreed upon coding standards for our primary languages, and in most languages these are enforced by linters or other code analysis tools.
  • Executable Specifications - Executable Specifications are our fundamental design tool; whether via Gherkin or OpenAPI, it creates a shared understanding of behaviour, and a common domain language.
  • Coverage - Code coverage should be used as an ancillary measure of how well we have written our Executable Specifications. We would expect a similar score across both categories; a discrepancy between scores requires further inspection and analysis.
  • Benchmarks - We are commonly asked to write high performance / low latency code; understanding the performance characteristics from a memory and CPU perspective is vital as production performance issues have big reputational impact.
  • Reference Documentation - Code should be self-documenting, but long-term support and maintenance of code requires context and narrative as well as purpose.
  • Design & Implementation Documentation - One of our constant failings is undervaluing the thought and effort that goes into creating solutions, and capturing all that knowledge in a format we can use to take customers along on the journey.
  • How-to Documentation - The use of IP can be nuanced. Effective documentation allows users to be self-starters. Questions should be captured as FAQs.
  • Demos - Understanding how to get started with our IP, or how different elements of IP can be used together is fundamental for increased productivity.
  • Date of Last IP Review - How recently IP was reviewed is a powerful code smell. We exist at the bleeding edge, which means that change is a constant. We need to be vigilant to change, especially when we depend on cloud PaaS services that can change beneath our feet.
  • Framework Version - The majority of our IP is based on the .NET Framework. This is now a rapidly evolving ecosystem. Staying up to date is no small feat.
  • Associated Work Items - The number of associated work items is another code smell; it can signal either chaos or order. We need to distinguish between these two states.
  • Source Code Availability - One of the most overlooked aspects of our use of IP is supporting our contractual clauses, allowing customers to access the source code for the binaries we use. Historically, we have been approached 5 years after the actual engagement as part of acquisition / due diligence processes.
  • License - A foundation of our commercial success is establishing the licensing of our IP.
  • Production Use - A good measure of the quality of our IP is how many times it is being used in a production environment by our customers.
  • Insights - When things go wrong, we need the infrastructure in place to help us quickly resolve the situation.
  • Packaging - It's one thing to create reusable IP; it's entirely another for it to be in a form that's easy to find and use.
  • Deployment - The final hurdle is getting the IP into an environment where it can be used.
  • Ops - We need to consider other personas, whose function is to support the IP we create. How do we make their experience "delightful"?

Obviously there is one major category missing - that of Security. We decided to iterate upon the above, while doing some thinking and research into how to approach that topic.

Once we had the initial list of categories, we started thinking about how the governance process would work. How would we score each project? Could we generate metrics via tools, or were some of the categories subjective and needed qualified humans to make judgement calls? How would our governance team manage the ever growing number of repositories being created?

We spent a lot of time coming up with a scoring system; after a number of iterations we realised that there were only two types of scores (there are probably better technical terms, but they elude me at the moment): discrete, i.e. a measure can only be 0, 1, 2 or 3; or continuous, whereby the values are cumulative (+1 for benchmarks which cover baseline performance, +1 for benchmarks which demonstrate failure conditions). For the list of measures we've come up with, look at the readme.

One of the next ideas was that we wanted to be able to, at a glance, understand the maturity of the code within a repository, and the idea of representing each of the maturity categories as a badge was born.

I created a proof of concept that encapsulates not only all of the measures and scores, but also a first pass at a ruleset schema. The proof of concept is a simple rules engine hosted inside an Azure Function that can read an imm.yaml file from the root of a public repository and render either the total score or one of the categories as a badge.
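As a minimal sketch of the two score types described above (hypothetical C# shapes, not the actual imm.yaml schema, which lives in the IMM repo):

```csharp
using System.Collections.Generic;
using System.Linq;

// Hypothetical shapes, purely illustrative.
// A discrete measure takes a single value on a fixed scale (e.g. 0..3);
// a continuous one accumulates +1 for each criterion the project meets.
record DiscreteMeasure(string Name, int Score, int Max);
record ContinuousMeasure(string Name, List<bool> CriteriaMet);

static class Imm
{
    // The total IMM score is simply the sum over both kinds of measure.
    public static int Total(IEnumerable<DiscreteMeasure> discrete,
                            IEnumerable<ContinuousMeasure> continuous) =>
        discrete.Sum(m => m.Score) +
        continuous.Sum(m => m.CriteriaMet.Count(met => met));
}
```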

Below are the examples from our AIS.NET project (if you get broken images, it's because of Azure Functions cold starts - refresh the page):

Total IMM Score: [IMM badge]

IMM Categories: [one badge each for Shared Engineering Standards, Coding Standards, Executable Specifications, Code Coverage, Benchmarks, Reference Documentation, Design & Implementation Documentation, How-to Documentation, Date of Last IP Review, Framework Version, Associated Work Items, Source Code Availability, License, Production Use, Insights, Packaging, and Deployment]
We've only rolled the above process out over the last two months, but it seems to be working. Now our weekly governance meetings have a focus: how can we improve the IMM scores across all our repositories?

A side note about the UK Microsoft Development Community & Communication with the .NET Foundation

In 2017 endjin went fully remote; removing 4 hours of commuting per day enabled me to start re-engaging with local user groups and free-to-access development conferences. It had been ~7 years since I'd last been actively involved (start-up life being what it is), but I was shocked at what I found. Most of the user groups were now solely funded by local recruitment agencies, with additional funds often contributed from the organisers' own companies (many seem to be self-employed). I asked why they didn't have a deeper relationship with Microsoft (as these were mainly Microsoft-tech-focused groups) and a common thread emerged. They used to have a good relationship with Microsoft, but during the "Windows Mobile" period financial support was only forthcoming if all the user group sessions focused on Windows Mobile, or the user group / event was rebranded as a Windows Mobile-centric event. The organisers politely refused, their relationship with Microsoft ended there, and it has yet to be rekindled.

In 2018 we decided that this state of affairs wasn't acceptable and that organisations that either employ developers who benefit from these free-to-access events, or organisations that make their money from the Microsoft ecosystem should start to contribute towards the community. So we added £10,000 GBP to our budget in order to sponsor as many user groups and events as we could.

In 2018 we sponsored:

  • DDD Scotland
  • DDD Wales
  • DDD Reading
  • DDD South West
  • DDD East Anglia
  • DevEast
  • SQL Glasgow (now Data Scotland)
  • DotNet Sheffield

Most of the event sponsorship opportunities were based around promoting the sponsoring organisation from a recruitment perspective. We weren't interested in that, so when we could - we donated our "sponsorship stand" to Code Club, who sent a local representative to use the stand for their outreach activities.

This was a very insightful process. We were hugely impressed by the passion and commitment of the organisers; they were putting on fantastic events on a shoestring budget. But because of this limitation, higher inclusivity goals such as improving diversity and access, hardship bursaries, or childcare support were neglected. It was obvious that the organisers, while all passionate developers, were not professional event managers. They would try to do fundraising and then put on an event that would fit that budget, rather than working out how much it would take to include these "extras" and fundraise for that target.

It was obvious (to me, with my business hat on) that these Microsoft-centric communities need a foundation that could help with legal matters, logistics, centralised procurement (preferred suppliers for public liability insurance etc.), marketing, fundraising and outreach playbooks, guidance for codes of conduct, lists of volunteers with appropriate background checks (for working with children), payment systems for donations and sponsorship, and centralised event management (or meetup.com subscriptions) for CFPs, speaker selection, speaker profiles, tickets etc.

I was very excited when the .NET Foundation became more active in late 2018 / early 2019. I was particularly interested in the new corporate sponsorship as this felt like it could be a perfect vehicle for our yearly £10,000 GBP CSR fund. I emailed for details on how we could become "small company" sponsors on 20/03/2019. I didn't get a reply, so I emailed again 06/04/2019, and finally on 02/06/2019.

It's quite frustrating when you are repeatedly saying "take my money" but you can't even get a response. So while I applaud the fact that the tech giants (Google / AWS) have joined the .NET Foundation, there are tens of thousands of Microsoft partners (in the UK alone) and many, many more companies, and (millions of) developers who depend on, or derive value from the .NET community and could contribute financially, and those funds could then be used to support all the user groups and free-to-access events.

@jongalloway (Collaborator)

> I was very excited when the .NET Foundation became more active in late 2018 / early 2019. I was particularly interested in the new corporate sponsorship as this felt like it could be a perfect vehicle for our yearly £10,000 GBP CSR fund. I emailed for details on how we could become "small company" sponsors on 20/03/2019. I didn't get a reply, so I emailed again 06/04/2019, and finally on 02/06/2019.
>
> It's quite frustrating when you are repeatedly saying "take my money" but you can't even get a response. So while I applaud the fact that the tech giants (Google / AWS) have joined the .NET Foundation, there are tens of thousands of Microsoft partners (in the UK alone) and many, many more companies, and (millions of) developers who depend on, or derive value from, the .NET community and could contribute financially, and those funds could then be used to support all the user groups and free-to-access events.

I'm very sorry about this, @HowardvanRooijen. I've set up a new [email protected] list including board members to prevent myself from being a single point of failure here, and will loop the team in on your latest e-mail. I think it's very important to have some sponsors come in at different tiers, and I'm very thankful for your interest. Again, please accept my apologies; I hope we can get this set up quickly.
