(*) one master test machine (abbreviated 'test') - this is where you
trigger everything from
(*) two servers with myplc installed (abbreviated 'plc1' and 'plc2')
at this stage the myplc configuration needs to be done by hand
you also need root ssh access from the test box to both plc boxes
-- configuring the plc servers:
(*) set their names in the Makefile (PLC1 and PLC2)
(*) edit TestPeers.py too -- xxx should be improved
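As a sketch, the relevant Makefile settings might look like this (the hostnames are placeholders, not real machines):

```shell
# Makefile fragment - hostnames are placeholders for your two myplc servers
PLC1=plc1.example.org
PLC2=plc2.example.org
```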
To some extent this stuff can be run to control a single plc.
right now, 4 test sizes are supported, they are named
* m (minimal) 1 object of each kind is created
* n (normal)  a few of each kind
* b (big)     a few hundred
* h (huge)    a few thousand
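For illustration, the size letters could be mapped to rough per-kind counts with a small helper like this (the counts for n/b/h are illustrative guesses, not read from TestPeers.py):

```shell
# hypothetical helper mapping a size letter to a per-kind object count;
# the n/b/h numbers are assumed, only m=1 is stated above
size_count () {
    case "$1" in
        m) echo 1 ;;
        n) echo 5 ;;
        b) echo 300 ;;
        h) echo 3000 ;;
        *) echo "unknown size: $1" >&2 ; return 1 ;;
    esac
}
size_count m
```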
two modes are supported:
(1) single run mode: everything is run from the 'test' box
this is a convenient, but very slow mode, especially for large sizes,
because each individual object is created in a single xmlrpc command
<<< xxx btw, I did not know about begin/commit at that time, but I do not
think it can speed things up to the point where the second mode does >>>
In this mode, both DBs are populated together, and various checks can be
performed along the way
(2) populate-and-run mode
where the DB-populating parts are done beforehand, and separately,
from each plc's chroot jail for direct access to the DB
This mode allows for dump & restore of the populated DB
(*) manually install myplc on both servers, do the configs manually
(once and for good, see upgrades below)
- check Makefile and TestPeers.py for correct identification of
so the local repository gets synced on all nodes.
(*) get and push peers information
(at least once) so the gpg and other authentication materials get pushed to both nodes
IMPORTANT: for cleaning up previous tests if needed
=> cleans up both databases if needed
<<< xxx this uses the initscripts to perform the job, it's a bit slow on
old boxes, could probably be improved by snapshotting the db right
after plc gets started >>>
$ make testpeers-m.run
assumes the dbs are clean, and runs the test locally

$ make testpeers-b.all
cleans both dbs and runs .run

$ make testpeers-m.diff
checks the output against the .ref file that should be under subversion

$ make testpeers-m.ckp
adopts the current .out as a reference for that test - does *not*
commit under subversion, just copies the .out into .ref
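The .diff / .ckp pair boils down to something like the following sketch (the file names follow the naming above, the contents here are fake; the real targets are driven by the Makefile):

```shell
# sketch of what .diff and .ckp amount to, with fake contents
echo "fake test output" > testpeers-m.out
echo "fake test output" > testpeers-m.ref       # pretend a .ref was checked in
diff testpeers-m.ref testpeers-m.out && echo "diff: output matches reference"
cp testpeers-m.out testpeers-m.ref              # .ckp: adopt .out as the new .ref
```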
cleans both dbs, cleans any former result, then runs .init and .run
$ make populate-b.init
assumes both dbs are clean, populates both databases and dumps them into the .sql files

performs the populate_end part of the test from the populated database
$ make populate-b.restore
restores the databases to the state right after they were populated

$ make populate-b.clean
cleans everything except the sql files

$ make populate-b.sqlclean
cleans the .sql files
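The dump performed by .init and replayed by .restore can be pictured with this dry-run sketch; the chroot path matches the layout under the implementation notes, but the database name 'planetlab4' and user 'pgsqluser' are assumptions about a stock myplc install, so the commands are printed here rather than executed:

```shell
# dry-run: print (do not execute) the assumed dump/restore commands
dump_cmd="chroot /plc/root pg_dump -U pgsqluser planetlab4"
restore_cmd="chroot /plc/root psql -U pgsqluser planetlab4"
echo "$dump_cmd > populate-b.sql"        # done once by populate-b.init
echo "$restore_cmd < populate-b.sql"     # replayed by populate-b.restore
```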
==== various utilities
(*) upgrading both plcs (xxx could be improved):
- log in as root on both plc servers and manually curl the myplc rpm you
want, into /root/new_plc_api/tests
- back on the test server:
if you want to upgrade only one, of course just use target upgrade.1
(*) cleaning the database
$ make db-clean.1 (or .2 or .3)
==== implementation notes
I've designed this thing so everything can be invoked from the test
server, even when things need to actually be triggered from a chroot
jail on one of the plc's.
For this reason the same pieces of code (namely Makefile and
TestPeers.py) need to be accessible from
(*) root's homedir on both plc's, namely in /root/new_plc_api/tests
(*) the chroot jail on both plc's, namely in /plc/root/usr/share/plc_api/tests
This is where the 'push' target comes in
xxx at this stage the push target pushes the whole API, because I used this
to test the code that I was patching on the test node. this can be improved
If you need to invoke something on a given plc, append '.1' or '.2' to the target name
$ make db-clean.1
will run the 'db-clean' target on the plc1 node

$ make db-clean.3
will run db-clean on both plc's
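The suffix dispatch can be sketched like this dry-run (run_on, the echoed ssh line, and the hostnames are all hypothetical stand-ins for what the Makefile actually does):

```shell
# dry-run sketch: strip the '.N' suffix, pick the host list for it,
# and print (not run) the remote make invocation
run_on () {
    target=${1%.*}
    case "${1##*.}" in
        1) hosts="plc1" ;;
        2) hosts="plc2" ;;
        3) hosts="plc1 plc2" ;;
    esac
    for h in $hosts ; do
        echo "ssh root@$h make -C /root/new_plc_api/tests $target"
    done
}
run_on db-clean.3
```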
When something needs to be run in the chroot jail, run
$ make sometarget.chroot
that will make sometarget from the chroot jail

So from the test server
$ make sometarget.chroot.3
runs in both chroot jails
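The .chroot wrapping can be pictured with this dry-run (the in-jail path mirrors /plc/root/usr/share/plc_api/tests from the implementation notes; 'sometarget' and the echoed command are illustrative only):

```shell
# dry-run: print the command a hypothetical sometarget.chroot boils down to
target=sometarget
echo "chroot /plc/root make -C /usr/share/plc_api/tests $target"
```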