I think the simplest way is to implement it in the runtime, and just expose it via x/exp/maps. This would require a single method (AFAICT no hooks) to be exported via go:linkname from the runtime into x/exp/maps (and you are correct, this would require importing unsafe in x/exp/maps).
All of this for the simpler case of shrink-on-delete1; the case of grow-on-read is even fuzzier and may not be solved for quite a while (especially considering the potential performance implications).
This proposal has been added to the active column of the proposals project and will now be reviewed at the weekly proposal review meetings. — rsc for the proposal review group
It is a good question. I remember when reviewing that change that it seemed applicable to me. Maybe Josh was seeing something I didn't.
Definitely agree that directly fixing the underlying issues would be better, but it's also true that the issues have been open for years, and there seems to be no consensus on how to proceed (or even whether we can proceed).
I think this is a case of tuning a heuristic. We do at some point need to give up on the hint and start shrinking. (I guess the other option is to always respect the hint, but I think that assumes hints are more accurate than they often are.) We may need to remember the hint for a period of time. But maybe that's not necessary. Maybe as long as inserts, or (inserts − deletes), are happening at a sufficiently positive rate, we don't shrink.
Mind you, I am not saying these are insurmountable issues, or that this proposal is an actual alternative to solving them properly. I am just arguing that it has been 5+ years, and that maybe we shouldn't let perfect be the enemy of good (especially in case good won't hurt us in the future, as we would have an easy way to guarantee compatibility across versions).
I suspect we could deal with this the way inserts work. When we grow from capacity N to 2N, we do enough work at each insert that the grow work is guaranteed to be done by the time the number of entries reaches 2N. Similarly, when there are fewer than N/4 entries in an N-capacity map, we could shrink to capacity N/2. If we move at least 1 of the N/4 entries on each subsequent operation, then we're guaranteed we won't exceed our new N/2 capacity before the shrink is done. And if we move 2 entries per operation, we could guarantee that the shrink is done before we'd want to shrink again (@ N/8 entries in our new N/2 capacity map).
And, while I haven't given this too much thought, I can imagine some tricky tradeoffs in how to deal with the case of a map that starts growing in the middle of a shrink operation.
Shrink-on-delete seems straightforward, we just need someone to implement it. Josh already checked in most of the mechanism needed for the tooManyOverflowBuckets change. "can" = yes, "how" = allocate a person to it. Grow-on-read seems harder, but I don't see any obvious blockers. We'd need a design sketch first, I think, before "can" is certain. Once we have that, "how" is again people-limited.
I propose to add, to partially address #20135 and #19094 (and the many external issues linked in those discussions), a Shrink function to x/exp/maps1 with the following semantics:
My interpretation of Josh's comment was the opposite, i.e. that the mechanisms needed for the tooManyOverflowBuckets were somewhat unlikely to help with #20135.
That is a good point. We probably don't want to shrink if people do the map clear idiom + reinsert a new set of stuff. Perhaps we only start shrinking after 2N operations, so that N delete + N add involves no shrinking. There's definitely a tradeoff here between responsiveness when you use fewer entries and extra work that needs to be done to regrow.
It's going to hurt someone; I'm not too hung up on that. Performance optimization is seldom a win-win. I'm looking for something as good as or better than an 80% win / 20% lose split.
In the future, this could potentially be extended to consider memory pressure, or alternatively turned into a no-op if the runtime were to fully address both #20135 and #19094 (although even with those implemented, Shrink could still be used to handle the case of a map that receives no reads or writes).
Currently the maps package has no package dependencies. Presumably the implementation of this would at least require unsafe and possibly some hooks into runtime as well, right?
As an initial implementation, Shrink would likely just check if any of the following conditions is true, in which case it would perform the appropriate action:
As an additional hurdle, mapclear currently does not change the bucket count. Would the fix for #20135 change this, even though #20138 explicitly argued against changing the bucket count?
that I am assuming would be implemented synchronously and incrementally, similarly to the way map growth is; some of these issues would go away if we instead used an asynchronous, incremental, GC-mediated approach, closer to the way sync.Pool cleaning is done, but I suspect that would bring its own share of new issues ↩
On the topic of capacity, and intersecting with the previous point, what should the map do with a map with a high hint while it's growing? What if the hint was too large, and the map length never reaches anywhere close to that capacity hint?
Lastly, how do we plan to ensure that the fix for #20135 won't hurt workloads some people may care about, especially since there may be no available workaround for them?
This proposal should be seen as an effective intermediate step, as it offers a clean way forward even if at a later date we actually fix #20135 and/or #19094.
With the proposed addition, users could instead simply call Shrink periodically, when the map is not in use, to ensure that memory is not leaked. Note that this proposal also addresses the case in which a map stops receiving writes (or reads and writes) completely, which would not be covered by any existing or proposed built-in runtime mechanism, as they all require at least some operation to piggyback on.
Users today can either use Clone or perform a manual clone of an existing map, but they have no visibility into the internal state of the map, so they cannot know whether doing so will actually decrease memory usage, or whether they are just wasting CPU cycles. Users could approximate this by manually keeping track of the high watermark and comparing it to the current length of the map, but this 1) can be quite difficult to do, as all map writes need to be instrumented, and 2) still does not address the "map is growing" case.
I'm just pointing out that we've long avoided adding API for situations where we could fix the issue without adding API. I understand that can lead to long delays in getting things implemented, but that's the tradeoff we have historically made. Top of mind recently, see #29951.
It sounds like we believe this need can be addressed with better tuning, rather than adding new API. That would suggest we should decline this proposal. Do I have that right?