runtime: Swiss Table maps can double size multiple times when deleting/adding elements #70886

Open · thepudds opened this issue Dec 17, 2024 · 4 comments
Labels: compiler/runtime (Issues related to the Go compiler and/or runtime), NeedsInvestigation (Someone must examine and confirm this is a valid issue and not a duplicate of an existing one)
Milestone: Backlog
Comments

thepudds (Contributor) commented Dec 17, 2024

Go version

go version go1.24rc1

Output of go env in your module/workspace:

N/A

What did you do?

When repeatedly deleting elements from and adding elements to a Swiss Table map without increasing the element count, the map can grow multiple times (e.g., from 128 slots to 1024 slots in a ~30s test).

I think the current implementation contains a simplification (compared to the Abseil and CockroachDB implementations) such that some growth is expected in lieu of a same-sized grow or rehashing in place, but it seemed worth a tracking bug that tables can end up growing substantially larger.

Here's a sample test demonstrating this:
https://go.dev/play/p/RITVDebV5op?v=gotip

It's runnable on the playground, where it sometimes passes and sometimes fails, though the main intent is to run it locally.
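
As a rough sketch of the shape of that test (illustrative only; the names, sizes, and loop count here are not the playground code): keep the element count fixed while repeatedly deleting one key and inserting a brand-new one.

package tombstone_test

import "testing"

// TestDeleteAddCycle keeps len(m) fixed while churning keys: each iteration
// deletes the oldest key and inserts a fresh one, the pattern that can leave
// tombstones behind in a Swiss Table map.
func TestDeleteAddCycle(t *testing.T) {
    const elems = 14
    const loops = 1_000_000

    // The size hint is chosen so the initial underlying table is comfortably
    // larger than elems; the report above uses a 128-slot table for 14 elements.
    m := make(map[int]int, 100)
    for i := 0; i < elems; i++ {
        m[i] = i
    }

    next := elems
    for i := 0; i < loops; i++ {
        delete(m, next-elems) // drop the oldest key...
        m[next] = next        // ...and add a new one, so len(m) stays at elems
        next++
    }

    if len(m) != elems {
        t.Fatalf("len(m) = %d, want %d", len(m), elems)
    }
    // The actual test also inspects the map's underlying table size via a
    // small runtime hack; this sketch only reproduces the workload shape.
}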

Using that test, here's a sample run that starts with a ~10% load (14 elements in a map with an underlying table size of 128), then loops 1M times, each iteration deleting one element and adding a different one (never going above 14 elements in the map). The map's underlying table grows from 128 slots to 512 slots over the course of those 1M delete/add cycles:

$ go1.24rc1 test -count=3 -v -loop=1000000 -run=TestTombstoneGrow/tableSize=128/elems=14
=== RUN   TestTombstoneGrow
=== RUN   TestTombstoneGrow/tableSize=128/elems=14/load=0.109
    main_test.go:33: before delete/add loop: len(m)=14, underlying table size=128, map=0xc00002b140
    table: growing: old size=128, new size=256, map=0xc00002b140
    table: growing: old size=256, new size=512, map=0xc00002b140
    main_test.go:53: [after delete/add loop]  len(m)=14, underlying table size=512, map=0xc00002b140
    main_test.go:56: got 2 allocations per run
--- FAIL: TestTombstoneGrow (0.34s)
    --- FAIL: TestTombstoneGrow/tableSize=128/elems=14/load=0.109 (0.34s)

The results above rely on a minor hack in the runtime to report the underlying table size and to print when tables grow.

If we instead loop 100M times on that same test, the map grows from 128 table slots to 1024 table slots:

$ go1.24rc1 test -count=3 -v -loop=100000000 -run=TestTombstoneGrow/tableSize=128/elems=14
=== RUN   TestTombstoneGrow
=== RUN   TestTombstoneGrow/tableSize=128/elems=14/load=0.109
    main_test.go:33: before delete/add loop: len(m)=14, underlying table size=128, map=0xc00002b140
    table: growing: old size=128, new size=256, map=0xc00002b140
    table: growing: old size=256, new size=512, map=0xc00002b140
    table: growing: old size=512, new size=1024, map=0xc00002b140
    main_test.go:53: [after delete/add loop]  len(m)=14, underlying table size=1024, map=0xc00002b140
    main_test.go:56: got 2 allocations per run
--- FAIL: TestTombstoneGrow (33.86s)
    --- FAIL: TestTombstoneGrow/tableSize=128/elems=14/load=0.109 (33.86s)

If we just loop, say, 100 times, the table does not grow, as expected:

$ go1.24rc1 test -count=3 -v -loop=100 -run=TestTombstoneGrow/tableSize=128/elems=14
=== RUN   TestTombstoneGrow
=== RUN   TestTombstoneGrow/tableSize=128/elems=14/load=0.109
    main_test.go:33: before delete/add loop: len(m)=14, underlying table size=128, map=0xc00002b140
    main_test.go:53: [after delete/add loop]  len(m)=14, underlying table size=128, map=0xc00002b140
--- PASS: TestTombstoneGrow (0.00s)
    --- PASS: TestTombstoneGrow/tableSize=128/elems=14/load=0.109 (0.00s)

One note of caution regarding the accuracy of this as a bug report: test pass/failure here is reported using testing.AllocsPerRun to see if an alloc occurs, but either I'm holding it wrong, or it seems to be flaky, or both. (I was purposely not using a more conventional runs count like 100, but maybe that's a mistake.)

CC @prattmic

What did you see happen?

8x memory used: the underlying table grew from 128 slots to 1024 slots.

What did you expect to see?

Less than 8x. Using an extra ~2x memory might be OK as a near-term simplification, but 8x seems high, and the memory can grow further.

gopherbot added the compiler/runtime label Dec 17, 2024

thepudds (Contributor, Author) commented:

(Ah, after stepping away, I realized my mistake with testing.AllocsPerRun. I was trying to set things up so that a "good" result was zero allocs, but I did that incorrectly, such that the t.Fatalf was not catching everything. Probably not super important to the main report here, but correcting it shows similar results, and now it reports pass/fail more reliably, I think: https://go.dev/play/p/xiWudCQADt5?v=gotip)
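
For reference, the corrected check looks roughly like the following (a minimal sketch rather than the playground code; the map setup and loop count are illustrative). testing.AllocsPerRun reports the average number of allocations per call of the function, so a table grow inside the delete/add loop shows up as a nonzero result.

package tombstone_test

import "testing"

// TestNoGrowDuringChurn fails if the delete/add churn causes any runtime
// allocation, which is what a grow of the map's underlying table would do.
func TestNoGrowDuringChurn(t *testing.T) {
    const elems = 14
    m := make(map[int]int, 100) // size hint is illustrative
    for i := 0; i < elems; i++ {
        m[i] = i
    }

    next := elems
    avg := testing.AllocsPerRun(1, func() {
        for i := 0; i < 1_000_000; i++ {
            delete(m, next-elems)
            m[next] = next
            next++
        }
    })
    if avg > 0 {
        t.Fatalf("got %v allocations per run; want 0 (the map table may have grown)", avg)
    }
}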

prattmic (Member) commented:

The rationale for leaving this as a TODO and not immediately adding a same-size grow is that growth should be logarithmic. Each time the map grows from tombstones, it gets harder and harder to create enough tombstones to grow again.

Luckily, your test seems to agree with this. While I think we'll want same-size grow eventually, IMO 100M iterations to grow from 128 to 1024 is not that bad. Or rather, it seems unlikely to cause significant problems in a real production application. But it's definitely good to keep this open for reports if folks do run into issues.
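
To make the "harder and harder" intuition concrete, here is a rough back-of-the-envelope model (the assumptions are mine, not confirmed in this thread: the table grows once live elements plus accumulated tombstones reach roughly 7/8 of capacity, and each grow drops all tombstones). Under that model, each successive doubling needs roughly twice as many fresh tombstones as the previous one; and since, as I understand it, a delete only leaves a tombstone under certain conditions (e.g., when its group is full), each tombstone also takes more delete/add cycles to create at low load, which would be consistent with the large iteration counts reported above.

package main

import "fmt"

// tombstonesToNextGrow is a hypothetical model, not the runtime's actual
// accounting: it assumes a table of the given capacity grows once live
// elements plus tombstones reach ~7/8 of capacity.
func tombstonesToNextGrow(capacity, live int) int {
    threshold := capacity * 7 / 8
    return threshold - live
}

func main() {
    const live = 14 // the report's example of 14 elements
    for capacity := 128; capacity <= 1024; capacity *= 2 {
        fmt.Printf("capacity=%4d: ~%d new tombstones before the next grow\n",
            capacity, tombstonesToNextGrow(capacity, live))
    }
}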

thepudds (Contributor, Author) commented Dec 18, 2024

Hi @prattmic, it does indeed get harder and harder, and it makes sense to focus on real-world apps (and my understanding is that you've already seen the new code running very successfully against real-world apps).

Regarding the 100M loop example growing from 128 slots to 1024 slots, I'll just briefly note that for that one, I happened to pick a very low load of around 10% as the starting point.

If I do the same experiment starting at a table size of 128 but instead at a ~43% load (which could hypothetically be just after a "natural" grow) or at an ~87% load (just before a grow), then it takes around 100K delete/adds and 10K delete/adds respectively for the tombstones to push it from a table size of 128 to 1024:

$ go1.24rc1 test -count=3 -v -loop=100000 -run=TestTombstoneGrow/tableSize=128/elems=56
=== RUN   TestTombstoneGrow/tableSize=128/elems=56/load=0.438
    [...]
    table: growing: old size=128, new size=256, map=0xc00002b1c0
    table: growing: old size=256, new size=512, map=0xc00002b1c0
    table: growing: old size=512, new size=1024, map=0xc00002b1c0
    main_test.go:59: after delete/add loop: len(m)=56, underlying table size=1024, map=0xc00002b1c0
    main_test.go:65: got 10 allocations per run
--- FAIL: TestTombstoneGrow (0.03s)
$ go1.24rc1 test -count=3 -v -loop=10000 -run=TestTombstoneGrow/tableSize=128/elems=112
=== RUN   TestTombstoneGrow/tableSize=128/elems=112/load=0.875
    [...]
    table: growing: old size=128, new size=256, map=0xc0000ba240
    table: growing: old size=256, new size=512, map=0xc0000ba240
    table: growing: old size=512, new size=1024, map=0xc0000ba240
    main_test.go:59: after delete/add loop: len(m)=112, underlying table size=1024, map=0xc0000ba240
    main_test.go:65: got 10 allocations per run
--- FAIL: TestTombstoneGrow (0.01s)

(I don't think that's a surprising result either -- a higher starting load means the tombstones have less work to do at the start to trigger their first grow.)

In any event, makes sense to see how this compares against other possible future refinements. Thanks for taking a look!

mknyszek added this to the Backlog milestone Dec 18, 2024
mknyszek added the NeedsInvestigation label Dec 18, 2024