Feedback on the proposed model #9
This is excellent feedback. Thanks for taking the time to write it up. Judging by the "reactions", it resonates with other folks, too. Request: Can you number your points? Once you do, I'll provide feedback. The numbering will make it tremendously easier to answer and for readers to follow along. Tip (in case you don't know): if you just put "1. " for all the list items, the markdown renderer will number the list correctly.
No problem, happy to provide feedback. I've added numbers - hope that helps.
One comment I would make is that the whole model seems to be barrier-based rather than accumulative. For example, and relating to point 4 above, instead of requiring that a project only depends on Lev 2 and above to be Lev 3, this could be recast as a positive for a Lev 1 project that has accrued Lev 3 dependencies. Clearly there may be some "veto-like" criteria, but I think an accumulative, positive model is generally better for encouraging participation and openness. I am also curious as to why it has not taken a more obviously risk-based approach, because that is really the question that this kind of maturity model is trying to address. What is the risk of taking a dependency on this project? And of course, that risk varies on different axes.
Being viral (a trusted project can only depend on trusted projects) is necessary. Imagine:
A chain is only as strong as its weakest link.
But any of those links may be the weakest. E.g. no dependency should be trusted, and a more robust maturity criterion to address this risk would be the validation process for upstream dependency updates. The overall maturity of the dependency has little impact on the risk in this area, and the maturity of the security processes in the consuming project is much more significant. Hence the reason why I think this should be reversed, and the fact that a lower-level project is depended on by a higher-level project (and subject to its stringent security review) should count positively for it, not negatively against the consumer. (As a "for example")
I agree that there is a significant requirement/burden today on the consumer to really validate that everything is safe. The goal is to cut that burden down to a more manageable size for consumers so that using OSS isn't so expensive and the safety so undefined.
W/o a trusted security process, it is unbounded (modulo the permissions level that the app runs at: admin, standard user, ...). Maybe I'm not thinking about it correctly, but I cannot see how the accumulative model would work, why taking a lower-level project dependency should be counted as positive, and what the fundamental value is that the accumulative model delivers. Meaning, I don't understand. One thing that is out of scope of the proposal, but is in scope for software engineering, is that you should take the fewest dependencies possible. This makes it easier to reason about software more readily across a variety of dimensions.
I think that what I am saying is that some kind of point-in-time review by an authority across all axes to gain admittance to a one-size-fits-all maturity club is necessarily less effective than an accumulation of confidence in those axes as determined by the consumers of those projects and their relative level of competence in that axis. It also allows for a less prescriptive approach to any given axis (prescription will tend to stifle innovation).
Answers to the questions ...
Note: I will update my answers as needed, and keep them in this issue entry, even though the conversation might keep on moving forward.
@mwadams -- is the Apache model more desirable to you? https://community.apache.org/apache-way/apache-project-maturity-model.html ... It defines specifics, but no progression. Or do you want something entirely different again? I cannot quite tell.
@JamesNK this proposal does a limited amount to address that, as trust in a project - particularly its less code-based aspects - driven by a review process is point-in-time based, the point in time being the point of review (hence some of my questions about ongoing review). As a maintainer I could be approved as level 3 then immediately (deliberately or otherwise) take actions to invalidate that. If I'm genuinely concerned about such things as a consumer I still need to review each dependency (and its policies etc) each time I take a version of it to assure myself things are as they were when last reviewed by the assessing panel. Of course this isn't a hard 1 or 0 type question - it's a matter of making value calls around risk mitigation, and some of it comes down to building trust in people over time. Ongoing review can mitigate it but that would need a committed cadence so that people making those risk calls could do so in an informed way.

As an aside, but somewhat related as again the focus seems to be here: there's a lot of focus on the consumer of projects throughout this proposal (and already in this thread) and not a lot on authors and contributors. If this does take off and the viral network effects kick in, it is likely to add pressure to folk who often already struggle with the demands of maintaining packages - by the time you get to level 3 and 4, what is effectively being asked is for a project to be managed on a very professional basis for the benefit of consumers who are highly concerned with the risks this maturity model addresses - largely commercial entities. When OSS is developed by commercial entities this isn't really an issue, but I must admit to being concerned about the impact on the many people who do this on the side / as a hobby. Funding for OSS is not in a good way in general and I can't help thinking that if we're going to look to professionalise the OSS space, that has to go hand in hand with looking at and improving those aspects of it.

It's late in the UK. Thoughts maybe better formed tomorrow.
@richlander - thanks for the detailed reply; as it's late in the UK I'll go through it properly tomorrow. However I'll quickly chip in: without regular re-review it's hard to see the value this provides to those looking to use the model to address risk in package adoption. It's not uncommon in such environments to adopt a cadence for re-review of dependencies and risks. If there's no regular re-review going on of a project's maturity level then the maturity model rating's value diminishes over time (half-life somewhat dependent on the consumer's sensitivity). And I'd argue the appearance of a badge on a project's GitHub page could give a false impression of a project's current status - and that of all its dependencies.
In fact, thinking about it, if you don't regularly re-review projects what you are actually doing is reviewing people as well as projects. More so, I would argue. When you assign the rating you're essentially saying "we've verified this project is at level n on date yyyy/mm/dd and we trust the maintainer(s) to inform us if anything affects that positively or negatively".
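To make the half-life metaphor above concrete, here is a small illustrative C# sketch (not part of the proposal) of how a consumer might discount a point-in-time rating as it ages; the 90-day and 365-day half-lives are invented placeholders standing in for a risk-sensitive vs. a more relaxed consumer.

```csharp
// Toy model of the "half-life" idea: confidence in a rating decays exponentially
// with the time since the last review. The half-life values are invented examples.
using System;

class RatingDecay
{
    static double Confidence(double daysSinceReview, double halfLifeDays) =>
        Math.Pow(0.5, daysSinceReview / halfLifeDays);

    static void Main()
    {
        foreach (var halfLife in new[] { 90.0, 365.0 })   // sensitive vs. relaxed consumer
        {
            for (var days = 0; days <= 720; days += 180)
            {
                Console.WriteLine(
                    $"half-life {halfLife,3} days, {days,3} days since review: " +
                    $"{Confidence(days, halfLife):P0} confidence remaining");
            }
        }
    }
}
```

With a committed re-review cadence, the "days since review" term stays bounded, which is essentially the point being made above.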
What policies are you worried about projects breaking after they've been certified? If it is dependencies, then there are tools for automatically analyzing the dependencies that a repo uses. @richlander, that might be something to consider. Authors were considered. That is why there is a sliding scale of 4 levels rather than an all-or-nothing approach. And the .NET Foundation should take care of anything that requires buying something (e.g. a certificate package and Authenticode signing). If you think some of the requirements are particularly difficult then you should be more specific with feedback on them. Make a case for why they should be moved to a higher level, or that they aren't important and should be removed.
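As a rough illustration of the kind of automated dependency analysis mentioned above, here is a hypothetical C# sketch that lists the PackageReference entries in an SDK-style .csproj and flags any that are not on an imaginary allow-list of certified packages. The file path and allow-list contents are placeholders, not part of any real DNF tooling.

```csharp
// Hypothetical sketch: enumerate PackageReference entries in an SDK-style .csproj
// and flag any package that is not on a (made-up) allow-list of certified packages.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Xml.Linq;

class DependencyAudit
{
    static void Main(string[] args)
    {
        var projectPath = args.Length > 0 ? args[0] : "MyProject.csproj"; // placeholder path

        // Placeholder allow-list; a real tool would pull this from whatever
        // registry the foundation published.
        var certified = new HashSet<string>(StringComparer.OrdinalIgnoreCase)
        {
            "Newtonsoft.Json",
            "Serilog"
        };

        // SDK-style project files have no XML namespace, so a plain element name works.
        var packages = XDocument.Load(projectPath)
            .Descendants("PackageReference")
            .Select(p => new
            {
                Id = (string)p.Attribute("Include"),
                Version = (string)p.Attribute("Version")
            })
            .Where(p => p.Id != null);

        foreach (var package in packages)
        {
            var status = certified.Contains(package.Id) ? "certified" : "UNREVIEWED";
            Console.WriteLine($"{package.Id} {package.Version}: {status}");
        }
    }
}
```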
One of the big issues in the past with the DotNetFoundation is we've had problems communicating with the DNF admin and also don't hear much feedback from the team members. We've also had technical problems that were easily fixed but, due to these communication issues, went 8-12 months without resolution (e.g. we had Azure DevOps issues and authentication issues where I was the only one able to approve builds for 12 months). Some of these issues still exist but have way less impact than they did 12 months ago.

Since a lot of these levels rely on having support from the DotNetFoundation, and given the past experiences, it'd be nice if a lot of the more undefined policies were also considered sooner rather than later. Some of the communication issues have been resolved but not all. @devlead, for example, suggests on a weekly basis getting communication from the board members out there in the form of videos or similar mechanisms. The communication issues even exist in regard to these policies: a lot of project maintainers found out through a tweet rather than through internal DotNetFoundation project-leader communication mechanisms. I think from our project's perspective it's not so much the requirements that are an issue but whether the DNF processes can handle it.
@JamesNK - will happily do so tomorrow (as indicated). In the meantime perhaps expand on some of your own comments - for example, how were authors considered? And perhaps remember that what those of us providing feedback can see is an output - not a thought process. Edit: I'd certainly support automated verification where possible.
So a couple of quick examples, just from level 1, of things that are time sensitive and could be difficult to automate: roadmap documentation - is it still being maintained? Is it up to date? If you move on through level 3 there are others, and they are essentially the more subjective "soft" parts of the project rather than the code. It might be possible to remove the subjectivity by changing the wording. For example, the point about fixing issues and encouraging contribution could become: "Issues are processed within a month of logging." Then you can measure it. But does that still have sufficient value?

To give some context: when I read the maturity model and have commented here, I'm thinking about the projects - not packages and code alone. The maturity of a project when looked at holistically changes over time (OSS or otherwise) and it's as much about people as it is code. I don't believe the assessment of the maturity of a project can be entirely automated (though some aspects can be, and signals can be derived even for the soft parts of a project, but to do that in the OSS world with all its variety would be a challenge to say the least). And point-in-time assessments of systems that change over time have limited, diminishing-over-time, value.

With regards to the wide gap between level 1 and 3, one approach might be to take a more broken-out or scored system rather than a tiered one. Score + colour maybe? For example @endjin have one here for their own projects https://github.com/endjin/Endjin.Ip.Maturity.Matrix (no affiliation other than knowing @HowardvanRooijen on Twitter).
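As a sketch of how a measurable wording like "issues are processed within a month of logging" could be checked automatically, the following C# program uses the public GitHub REST API to estimate how many recently closed issues were closed within 30 days of being opened. The owner/repo names are placeholders, only the first page of results is examined, and "closed within 30 days" is only a crude proxy for "processed".

```csharp
// Rough sketch only: estimate how many recently closed GitHub issues were closed
// within 30 days of being opened, via the public REST API (unauthenticated, so
// subject to rate limits). Owner and repo below are placeholders.
using System;
using System.Linq;
using System.Net.Http;
using System.Text.Json;
using System.Threading.Tasks;

class IssueSlaCheck
{
    static async Task Main()
    {
        const string owner = "example-org";  // placeholder
        const string repo = "example-repo";  // placeholder

        using var http = new HttpClient();
        http.DefaultRequestHeaders.UserAgent.ParseAdd("maturity-sla-sketch"); // GitHub requires a User-Agent

        var url = $"https://api.github.com/repos/{owner}/{repo}/issues?state=closed&per_page=100";
        using var doc = JsonDocument.Parse(await http.GetStringAsync(url));

        var durations = doc.RootElement.EnumerateArray()
            // The issues endpoint also returns pull requests; skip those.
            .Where(issue => !issue.TryGetProperty("pull_request", out _))
            .Select(issue => issue.GetProperty("closed_at").GetDateTimeOffset()
                           - issue.GetProperty("created_at").GetDateTimeOffset())
            .ToList();

        var withinMonth = durations.Count(d => d <= TimeSpan.FromDays(30));
        Console.WriteLine(
            $"{withinMonth} of {durations.Count} recently closed issues were closed within 30 days.");
    }
}
```

Whether that number means anything is, of course, exactly the "does it still have sufficient value?" question raised above.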
Absolutely. That's my thinking, too.
Agreed. I said in my long answer that we wouldn't re-review. My real underlying thinking is that we shouldn't only rely on the Foundation for review (the Foundation is largely volunteers, too). I agree with you that we should have a review system, but we need to think about the role of the foundation and community within that. There are a few models (that I can think of) to consider:
There are various levers here to play with. For example, the cadence for L1/2 and L3/4 projects doesn't need to be the same. Could be twice a year for L1/2 and four times a year for L3/4. Which do you prefer? Can you think of some others?
Boiling everything down to a numeric objective (for example, processing issues within a month) is an interesting idea. I am hesitant to go that direction, at least initially, because that's something I would expect significant maintainer resistance to (and I would be entirely sympathetic). I can see how there is a benefit to consumers in having tighter SLAs, but I didn't view it as the most critical thing to put in place for consumers and didn't expect overwhelming acceptance from maintainers.
That system is really interesting. I'm wondering if we should do both. On one hand, it is crazy to have more than one scheme. On the other, the two models are trying to achieve different things. The key aspect of the ladder is a prescribed progression. It's a little bit like going to university. A degree program defines a prescribed progression of courses and you end up as a software engineer or an accountant. Anyone looking at your degree knows exactly what that means and they can consider hiring you. This is as opposed to just +1ing on easy or interesting (to you) courses for four years with the expectation that you end up with a valuable degree that has meaning to others (hint: it won't). So, we could break out pure quality as a separate concept and make that the scoring system. Clearly, more thought is required on that.
This is a very real issue. The interest in dual licensing is very related. I have a few thoughts on this topic (some of which @JamesNK covered already).
I didn't address every single one of your points this time, but I think I did cover the bulk of it. Tell me if there is anything I missed that you feel needs discussion.
@glennawatson -- this is the first I've heard (except for you telling me earlier in the day) about DNF infra not delivering on its promise. I will directly follow up on that. This is a big concern for me because the proposal definitely relies on DNF infra that works super well and makes you happy to use it.
I have emails to the foundation going back to December 2018 that are unanswered. Geoffrey Huntley, for example, attempted to cc on some issues; those also went unanswered. @devlead has become our unofficial leader in terms of doing communication. He set up a leadership Slack with Jon to try and mitigate our issues. His team helps out the dnf a lot doing the newsletter admin. Even he's getting frustrated with the lack of movement in terms of board communication. The leadership Slack and the GitHub teams have made this better than it was, but @devlead keeps prompting and suggesting the board get out there and advertise continuously. These policies seem to rely on the dnf communication and infrastructure running at peak efficiency, which they haven't always.
Also one thing to note: ReactiveUI was the first project to go fully in on dnf infrastructure, unlike very large orgs like cake and prism. So we've been seen as these annoying people who keep complaining when others weren't, but it's mostly due to being the largest project using the dnf infrastructure.
Also worth noting that my messages sound all doom and gloom about the dnf. It has improved and come a really long way; there are just those teething issues that these policies should have some way of addressing for either party.
One thing that I already wrote on Twitter: it's perfectly fine for Microsoft to come up with a set of requirements that they give to the community and say: "look, those are the things you need to do so that we can put your product into our stack". That would be a great thing if done by MS. Putting the ladder into the foundation, on the other hand, makes it a thing that will spread to the whole community and will put pressure on projects that will never need to go into the MS stack. People in companies will just blindly adopt it and now projects are called "untrustworthy" just because they don't align with a set of requirements that MS needs internally. That's not good.
So, I'm too tired and have got too much on my plate to be constructive ATM, so I'll try to keep it short. I try to see myself as generally a very positive person, with an ambition to have patience and see the long-term goal. I don't believe in Twitter rants (I fail at times :) ) and would rather diplomatically reach out to people first before raging in public. If I weren't that person then I would probably make statements like: the foundation is
and therefore adding more things to their plate, before solving the above, seems unwise.
@glennawatson Kinda feel you're giving me a bit too much credit... but I've tried to express feedback in a constructive way, assist projects, set up Skype meetings, and express issues and possible solutions when I've met @jongalloway and previously @martinwoodward in person.
@richlander how has your experience been with the DNF infra team? Were they consulted during this proposal? Are they comfortable with this, and do they have the resources to scale it?
Have more thoughts about the proposal as a whole, but I wanted to respond to @glennawatson 's point about the existing .NET Foundation infrastructure:
So I feel a bit bad about this - the Akka.NET project, every time we receive a PR, kicks off 8 build agents in parallel to run our rather complex builds (a large test suite that needs to be run on many platforms, including long-running network fault tolerance tests). Some builds take as little as 2 minutes per agent. Others, like a full build as a result of a root dependency being modified, kick off jobs that take as long as 90 minutes to run on each build agent. We're going to be expanding this to up to ~12 build agents once we bring our long-running performance tests and networking tests on Linux back into the mix. When we're doing a lot of work on a new release, like we are now, it's not uncommon for us to have 4-5 pull requests all running builds at the same time. This means the rest of the .NET Foundation projects might have to wait at least an hour before any agents are available. This is a trivial problem to solve for Microsoft, I suspect - but what it points to is Glenn's larger point, that maintaining the infrastructure needed to work up this maturity ladder is going to become an increasingly demanding task for the .NET Foundation as:
What are the DNF's plans for managing the infrastructure all of these projects are going to need in order to mature? Or, alternatively, why shouldn't the projects be able to use their own - so long as you can see the green checks on GitHub or GitLab or Bitbucket, why should it matter?
Add to this list, re: infrastructure:
Today, in order for me to give another Akka.NET member the ability to set up CI for one of our sub-projects or plugins - is there a way I can do that without having to wait on a human being at the .NET Foundation? What are the plans for scaling this?
On the infrastructure side, we're working with the Azure Pipelines team to ensure we have as many agents as we need to not cause a bottleneck. One of the challenges right now is that Pipelines doesn't have a good way to see queue times (average, peak) across the whole account, so we don't know when things are stuck until people tell us. If you start seeing long queue times, please post an issue here https://github.com/orgs/dotnet-foundation/teams/project-support, or email me/us.
That is part of what this proposal is and part of the motivation. In 3.0, we removed all 3rd party dependencies from the product. We didn't want to do that. It hurt us to do it. We did it because we have no model for servicing 3rd party dependencies or managing trust/quality. This experience was a partial influence for this proposal, for me. We want to use community packages and recommend them to big/conservative customers. We need a model for that, and this is that. We could have just had L3 and L4 and called it good. Is that what you'd prefer?
I did not talk to them. I didn't even realize there was such a team. I (incorrectly/naively) assumed that DNF infra was fine. My mistake and I'm sorry about that.
@richlander that's why I think it should be advertised as such and not as something that the whole community must/should adopt. Too many companies will just follow this blindly. I understand that there is a desire to have it open and somewhat community driven. But let's be honest: the final decision on whether something belongs in the MS stack is made by MS lawyers. So this should not be something in the .NET Foundation. It should be something that MS states as their own minimum bar for adopting an external project.
That isn't true. It was >5 years ago, but not now. As I said earlier, we removed all the 3rd party dependencies from the product for 3.0. There were exactly zero conversations with lawyers as part of that. The product group decided that. If we move forward with this ladder and feel comfortable taking 3rd party dependencies again, then we will make that choice, and it is unlikely lawyers will be consulted for that, unless we have specific concerns where we need advice. If all the libraries we want to use have a compatible license, what advice would you expect lawyers to give us? Certainly not related to coding style or code coverage.
I only said that Microsoft being able to use/recommend projects was a partial inspiration/motivation. If that was the sole goal (which it definitely is not), then you are right that the proposal would have been different, but it still would have been public. Like I said earlier, it would have been just L3 and L4. If that had happened, I think people would be frustrated because the step function would have been too high, with no accompanying programs to help people along the way. That's what this proposal is intended to deliver ... a much softer/earlier on-ramp into the ladder and more supporting programs. It's also more similar to how other foundations work, with sandbox and incubation programs.
I'm not saying the requirements for the MS stack shouldn't be public. They definitely should be. I just don't think it's a good idea to extend the same standards to the community. Those things need to evolve independently.

That said, from a practical point of view I'd like to see more clarification on how projects like SAFE-Stack can fit into the ladder. It's a big pile of server-side / Azure-side ASP.NET Core libs that may get into the ladder, and an equally large pile of JavaScript ecosystem tech: npm, yarn, node, react, babel, ...

The dotnet projects could potentially apply to get into the foundation. What about the JS side? SAFE-Stack is a trusted stack, with companies providing training, consulting, ... But right now I don't see how it would ever be "trustworthy" in the eyes of the foundation.
Your question on JS (or other ecosystem) tech is answered here: #12 (comment). The short answer is that code from other ecosystems isn't considered.
Why is that?
This is a really good question, and I think worth spinning off as a separate issue so we can track it more effectively: #23
This is a very clear problem statement, and I don't disagree. The plan has been to scale up by bringing on the project support action group, but documenting processes and getting them set up takes time, too. It's a classic startup problem of being too busy to get help, even when people are asking to help. I'll work to get a project support action group meeting set up over the next few weeks (hopefully next week) to both gauge interest and start getting that ramped up.
The DNF infra team is basically me and @onovotny. We've been handling everything since I got started 2.5 years ago. The CLA service is run by another team, but everything else is just us. One visibility issue is that the .NET team working at Microsoft has a lot of engineering support and doesn't rely on DNF. One potentially positive effect of both this proposal and the general discussion is to create a more standardized ecosystem. We want to document what Microsoft's .NET dev team does and make it available to the community, and in doing that we'll also make these needs visible to that team. I added #23 to specifically track the infrastructure scaling issue.
Only considering the IL side of dependencies in a system that clearly also has security threats / maintenance challenges from other dependencies is a weird approach. "All dotnet deps must be Level 4 and we check it like crazy, but we don't care about leftpad or bitcoin miners as long as they are not in IL."
Very good point - spun this off to #27 to make sure we track it. Very open to suggestions on this. |
@jongalloway that's a scaling concern: if we're going to scale this to potentially all DNF and .NET projects, we need a proper support infrastructure. It's not fair nor realistic to ask that of you and @onovotny. Great that this is tracked in #23 👍
I thought I'd jump in and explain a little bit more about the IP Maturity Matrix (IMM) that @JamesRandall mentioned above. To understand the IMM, you need to understand a little about endjin, the company that @mwadams & I founded in 2010. We wanted to create a cloud-first consultancy that used Intellectual Property to enable a value-based pricing model rather than the more traditional T&M-based one. The majority of our revenue comes from building .NET applications, APIs, data & AI solutions for customers, hosted in Azure. Over the past 9 years we have been very successful at creating and reusing IP; in fact, we may now have too much to manage & maintain effectively. So we've recently decided to adopt an "open-source by default" policy and are in the process of migrating (while also updating, improving, refactoring and consolidating) all of our core IP onto GitHub (and nuget.org) from private repos in Azure DevOps. Internally, we have a Director of Engineering (@jamesbroome) and Technical Fellow (@idg10) who are responsible for the governance of our technical processes and IP. During our first discussions, we identified the following problems we face: How do we:
We soon realised that the next set of problems we'd have to deal with were those of governance and adoption. How do we get everyone inside the company to understand, trust and use the IP we're open sourcing (rather than fall into standard Not Invented Here behaviours)? We identified that our IP exists at 5 different levels of fidelity:
and that different "maturity" concerns were applicable at each level. We looked at some of the other maturity models out there and realised that many (like the Apache Maturity Model) were mainly geared towards "Level 5" IP. We also realised that many of the maturity measures were already in, or were extensions of, our existing "definitions of done".
Obviously there is one major category missing - that of Security. We decided to iterate upon the above, while doing some thinking and research into how to approach that topic.

Once we had the initial list of categories, we started thinking about how the governance process would work. How would we score each project? Could we generate metrics via tools, or were some of the categories subjective and in need of qualified humans to make judgement calls? How would our governance team manage the ever-growing number of repositories being created? We spent a lot of time coming up with a scoring system; after a number of iterations we realised that there were only two types of scores (there are probably better technical terms, but they elude me at the moment): discrete, i.e. a measure can only be 0, 1, 2 or 3; or continuous, whereby the values are cumulative (+1 for benchmarks which cover baseline performance, +1 for benchmarks which demonstrate failure conditions). For a list of the measures we've come up with, look at the readme.

One of the next ideas was that we wanted to be able to understand, at a glance, the maturity of the code within a repository, and the idea of representing each of the maturity categories as a badge was born. I created a proof of concept that not only encapsulated all of the measures and scores, but also included a first pass at a ruleset schema. The proof of concept is a simple rules engine hosted inside an Azure Function that can read a ruleset. Below are the examples from our AIS.NET project (if you get broken images it's because Azure Functions cold starts are the issue - refresh the page): [Total IMM Score badge] [IMM Categories badges]

We've only rolled the above process out over the last two months, but it seems to be working. Now our weekly governance meetings have a focus - how can we improve the IMM scores across all our repositories?

A side note about the UK Microsoft Development Community & Communication with the .NET Foundation

In 2017 endjin went fully remote; removing 4 hours of commuting per day enabled me to start re-engaging with local user groups and free-to-access development conferences. It had been ~7 years since I'd last been actively involved (start-up life being what it is), but I was shocked at what I found. Most of the user groups were now solely funded by local recruitment agencies, with additional funds often contributed from the organisers' own companies (many seem to be self-employed). I asked why they didn't have a deeper relationship with Microsoft (as these were mainly Microsoft tech-focused groups) and a common thread emerged. They used to have a good relationship with Microsoft, but during the "Windows Mobile" period financial support was only forthcoming if all the user group sessions focused on Windows Mobile, or the user group / event was rebranded as a Windows Mobile-centric event. The organisers politely refused and their relationship with Microsoft ended there, and has yet to be rekindled.

In 2018 we decided that this state of affairs wasn't acceptable, and that organisations that either employ developers who benefit from these free-to-access events, or that make their money from the Microsoft ecosystem, should start to contribute towards the community. So we added £10,000 GBP to our budget in order to sponsor as many user groups and events as we could. In 2018 we sponsored:
Most of the event sponsorship opportunities were based around promoting the sponsoring organisation from a recruitment perspective. We weren't interested in that, so when we could, we donated our "sponsorship stand" to Code Club, who sent a local representative to use the stand for their outreach activities.

This was a very insightful process. We were hugely impressed by the passion and commitment of the organisers. They were putting on fantastic events on a shoestring budget. But because of this limitation, higher inclusivity goals such as improving diversity & access, hardship bursaries, or childcare support were neglected. It's obvious that the organisers, while all passionate developers, were not professional event managers. They would try and do fundraising and put on an event that would fit that budget, rather than working out how much it would take to include these "extras" and fundraise for that target. It was obvious (to me, with my business hat on) that these Microsoft-centric communities need a foundation that could help with legal matters, logistics, centralised procurement (preferred suppliers for public liability insurance etc), marketing, fundraising & outreach playbooks, guidance for codes of conduct, lists of volunteers with appropriate background checks (for working with children), payment systems for donations and sponsorship, centralised event management (or meetup.com subscriptions) for CFPs, speaker selection, speaker profiles, tickets etc...

I was very excited when the .NET Foundation became more active in late 2018 / early 2019. I was particularly interested in the new corporate sponsorship as this felt like it could be a perfect vehicle for our yearly £10,000 GBP CSR fund. I emailed for details on how we could become "small company" sponsors on 20/03/2019. I didn't get a reply, so I emailed again on 06/04/2019, and finally on 02/06/2019. It's quite frustrating when you are repeatedly saying "take my money" but you can't even get a response. So while I applaud the fact that the tech giants (Google / AWS) have joined the .NET Foundation, there are tens of thousands of Microsoft partners (in the UK alone) and many, many more companies, and (millions of) developers, who depend on or derive value from the .NET community and could contribute financially, and those funds could then be used to support all the user groups and free-to-access events.
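To make the discrete vs. cumulative scoring idea described earlier in this comment concrete, here is a purely illustrative C# sketch of how such measures could be rolled up into a total score. The measure names and point values are invented examples, not endjin's actual IMM rules.

```csharp
// Illustrative only: two kinds of maturity measures - "discrete" (a single 0..max
// level) and "cumulative" (+1 per satisfied criterion) - rolled up into a total score.
using System;
using System.Collections.Generic;
using System.Linq;

class Measure
{
    public string Name;
    public int Score;
    public int MaxScore;
}

class ImmScoringSketch
{
    // A discrete measure: the project sits at exactly one level on a 0..max scale.
    static Measure Discrete(string name, int level, int max) =>
        new Measure { Name = name, Score = Math.Min(Math.Max(level, 0), max), MaxScore = max };

    // A cumulative measure: each satisfied criterion adds +1.
    static Measure Cumulative(string name, params bool[] criteria) =>
        new Measure { Name = name, Score = criteria.Count(c => c), MaxScore = criteria.Length };

    static void Main()
    {
        var measures = new List<Measure>
        {
            Discrete("Documentation", level: 2, max: 3),          // invented example
            Cumulative("Benchmarks",
                true,   // baseline performance covered
                false), // failure conditions not yet covered
        };

        foreach (var m in measures)
            Console.WriteLine($"{m.Name}: {m.Score}/{m.MaxScore}");

        Console.WriteLine(
            $"Total IMM score: {measures.Sum(m => m.Score)}/{measures.Sum(m => m.MaxScore)}");
    }
}
```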
I'm very sorry about this, @HowardvanRooijen. I've set up a new [email protected] list including board members to prevent myself from being a single point of failure here, and will loop the team in on your latest e-mail. I think it's very important to have some sponsors come in at different tiers and I am very thankful for your interest. Again, please accept my apologies; I hope we can get this set up quickly.