spanner: unflake TestStressSessionPool
Even though the test shuts down the health checker, which in turn shuts
down the maintainer, the maintainer could still be replenishing the
pool at the exact moment the test ended. This could cause a flaky
failure, as the mutex was not held while dumping the sessions from the
pool into local maps. Those maps are compared against the contents of
the session pool a little later, and the pool could have grown slightly
in the meantime. Taking the lock before the dump closes that window.
Fixes #1778.
Change-Id: I65e39006978681b2cbef02495c5805c1e0025c83
Reviewed-on: https://code-review.googlesource.com/c/gocloud/+/52170
Reviewed-by: kokoro <noreply+kokoro@google.com>
Reviewed-by: Hengfeng Li <hengfeng@google.com>
diff --git a/spanner/session_test.go b/spanner/session_test.go
index 6516530..f4521d4 100644
--- a/spanner/session_test.go
+++ b/spanner/session_test.go
@@ -1311,6 +1311,7 @@
sp.hc.close()
// Here the states of healthchecker, session pool and mockclient are
// stable.
+ sp.mu.Lock()
idleSessions := map[string]bool{}
hcSessions := map[string]bool{}
mockSessions := server.TestSpanner.DumpSessions()
@@ -1329,7 +1330,6 @@
}
idleSessions[s.getID()] = true
}
- sp.mu.Lock()
if int(sp.numOpened) != len(idleSessions) {
t.Fatalf("%v: number of opened sessions (%v) != number of idle sessions (%v)", ti, sp.numOpened, len(idleSessions))
}