Reference test results for the following run targets:
- run-m : starts from 2 empty dbs and performs all tests in mini mode
(1 instance of each object)
- run-en : the dbs are supposed to have been populated by running -p -n,
that is with normal size, a handful of each object
- run-eb : idem in big mode, a few hundred of each type -
comparable to the public PL
- run-eh : idem in huge mode, a few thousand of each type
The first mode is fairly complete and convenient: just wipe both dbs,
and everything is done remotely from the test node.
The other modes are more tedious to run;
I usually run the populate phase locally, save the db dumps, and reinstall them later,
otherwise it is not workable.
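The save/restore workflow for the big and huge modes could look like the sketch below. It only prints the commands it would run (a dry run); the database name "planetlab5", the "pgsqluser" role, and the dump path are assumptions for illustration, not taken from the tests.

```shell
#!/bin/sh
# Dry-run sketch: populate once locally, snapshot the db, and restore
# the snapshot before each timed run instead of repopulating.
# DB name, role, and dump path are hypothetical.
DB=planetlab5
DUMP=/var/tmp/${DB}-big.sql

# 1. after running the populate phase (-p -n) locally, take a snapshot:
echo "pg_dump -U pgsqluser $DB > $DUMP"

# 2. before each timed run, recreate the db from the snapshot:
echo "dropdb -U pgsqluser $DB && createdb -U pgsqluser $DB"
echo "psql -U pgsqluser $DB < $DUMP"
```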
They give good performance indications for:
(*) RefreshPeer from scratch
(*) RefreshPeer in usual mode (no change done)
(*) GetSlivers()
A rough indication in huge mode:
each plc gets populated with
1000 sites, 2000 persons, 3000 nodes & 2000 slices,
1 key/person, 3 nodes/slice & 3 persons/slice
- 1st refresh peer
all : 407 s
xmit: 25 s
proc: 383 s
- 2nd refresh peer
all : 42 s
xmit: 25 s
proc: 17 s
note that updating slice attributes is still not optimized with respect to sync operations;
processing time could reasonably be brought under 10 s.
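To put the 383 s of first-refresh processing in perspective, a rough per-object cost can be derived from the populate numbers above. The link counts (keys, slice-node and slice-person attachments) are computed from the stated ratios, so the total is an estimate, not a measured row count.

```python
# Estimated object count in huge mode, from the populate figures above.
sites, persons, nodes, slices = 1000, 2000, 3000, 2000
keys = persons * 1           # 1 key/person
slice_nodes = slices * 3     # 3 nodes/slice
slice_persons = slices * 3   # 3 persons/slice

objects = sites + persons + nodes + slices + keys + slice_nodes + slice_persons
proc_seconds = 383           # processing time of the 1st refresh peer

print(objects)                                   # -> 22000
print(round(proc_seconds / objects * 1000, 1))   # -> 17.4 (ms per object)
```

So the first refresh spends on the order of 17 ms per synchronized object, which gives a baseline to compare against after any sync optimization.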
- GetSlivers ()
536 s just to fetch the result from the peer, with no processing