Overview

In this codelab, you will build a client-side Autotest test that checks the disk and cache throughput of a ChromiumOS device. You will learn how to set up the environment needed for autotest, run and edit a test, write a new test and control file, and check the results of a test. In the process you will also learn a little about the autotest framework.

Background

Autotest is an open source project designed for testing the Linux kernel. Before starting this codelab you might benefit from scrolling through some upstream documentation on autotest client tests.

Autotest is responsible for managing the state of multiple client devices as a distributed system, by integrating a web interface, a database, servers, and the clients themselves. Since this codelab is about client tests, what follows is a short description of how autotest runs a specific test on one client.

Autotest looks through all directories in client/tests and client/site_tests for simple python files whose names begin with 'control.'. These files contain a list of variables and a call to job.run_test. The control variables tell autotest when to schedule the test, and the call to run_test tells autotest how to run it. Each test instance is part of a job: autotest creates this job object and forks a child process that execs the control file. Note that this exec is the python keyword, not os.exec.

Tests reside in a couple of key locations in your checkout and map to similar locations on the DUT (Device Under Test); for example, a test under client/site_tests in your checkout lands under /usr/local/autotest/tests on the device. Understanding the layout of these directories might give you some perspective.
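To make the discovery and exec flow above concrete, here is a small, hypothetical Python sketch; find_control_files and run_control_file are illustrative stand-ins, not the real framework code (which also handles scheduling, forking, and logging):

    # Hypothetical sketch of control file discovery and execution.
    import os

    def find_control_files(roots=('client/tests', 'client/site_tests')):
        # Yield every file whose name begins with 'control' under each test dir.
        for root in roots:
            for test_dir in sorted(os.listdir(root)):
                full = os.path.join(root, test_dir)
                if not os.path.isdir(full):
                    continue
                for name in os.listdir(full):
                    if name.startswith('control'):
                        yield os.path.join(full, name)

    def run_control_file(job, path):
        # The control file is executed with the python exec built-in
        # (not os.exec), with the job object available in its namespace.
        with open(path) as f:
            exec(compile(f.read(), path, 'exec'), {'job': job})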
Prerequisites

You will need a ChromiumOS source checkout (or at least the autotest source) and either a test device (DUT) or a workstation that can run a ChromiumOS VM; both paths are covered below.
Objectives

In this codelab, we will:
1. Set up the environment needed for autotest
2. Run and edit a test
3. Write a new test and control file
4. Check the results of the test
Running a test on the client

First, get the autotest source:
a. If you Got the Code, you already have autotest.
b. If you do not wish to sync the entire source and reimage a device, you can run tests in a VM.
If the cros_start_vm script fails, you may need to enable virtualization on your workstation. Check for /dev/kvm or run 'sudo kvm-ok' (you might have to 'sudo apt-get install cpu-checker' first). It will either say that /dev/kvm exists and KVM acceleration can be used, or that /dev/kvm doesn't exist and KVM acceleration can NOT be used. In the latter case, hit Esc on boot, go to 'system security:', and turn on virtualization. More information about running tests on a vm can be found here.

Once you have autotest, there are two ways to run tests: using your machine as a server, or directly from the client DUT. Running directly on the device is faster, but requires invoking the test from your server at least once.

Through test_that

1. Enter the chroot:
      cros_checkout_directory$ cros_sdk
2. Invoke test_that to run login_LoginSuccess on a VM with local autotest bits:
      test_that localhost:9222 login_LoginSuccess

The basic usage of test_that is:
      test_that -b board dut_ip[:port] TEST
TEST can be the name of a test, or suite:suite_name for a suite. For example, to run the smoke suite on a device with board x86-mario:
      test_that -b x86-mario 192.168.x.x suite:smoke
Please see the test_that page for more details.

Directly on the DUT

You have to use test_that at least once so it copies over the test and its dependencies before attempting this; if you haven't, /usr/local/autotest may not exist on the device.
      ssh root@<ip_address_of_dut>   (password: test0000)
Once you're on the client device:
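A typical invocation, assuming test_that has already copied autotest to its default location (the exact paths may differ on your image), runs the test's control file through autotest_client:

      cd /usr/local/autotest
      bin/autotest_client tests/login_LoginSuccess/control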
Editing a Test

For python-only changes, test_that picks up your local modifications automatically. The fastest way to iterate on a test, though, is directly on the client. If you find the text editor on a Chromium OS device non-intuitive, edit the file locally and use a copy tool like rcp/scp to send it to the DUT:
1. Add a print statement to the login_LoginSuccess test you just ran.
2. Copy it into /usr/local/autotest/tests on the client:
      rcp path/to/login_LoginSuccess.py root@<DUT_ip>:/usr/local/autotest/tests/login_LoginSuccess/
3. Run it by invoking autotest_client, as described in the section on running tests directly on the client. Note that a print statement won't show up when the test is run via test_that.

The more formal way of editing a test is to change the source and emerge it. The steps for doing this are very similar to those described in the section on emerging tests. You might want to perform a full emerge if you've modified several files, or would like to run your test in an environment similar to the automated build/test pipeline.

Writing a New Test

A word of caution: copy-pasting from Google Docs has been known to convert consecutive whitespace characters into unicode characters, which will break your control file. Using CTRL-C + CTRL-V is safer than using middle-click pasting on Linux.

Our aim is to create a test which does the following:
1. Run hdparm to measure disk and cache throughput.
2. Search its output for the timing numbers.
3. Report these numbers as results.
1. Create a directory for the test under client/site_tests (e.g. client/site_tests/kernel_HdParmBasic).
2. Create a control file, kernel_HdParmBasic/control. A bare-minimum control file for the hdparm test:

      AUTHOR = "Chrome OS Team"
      NAME = "kernel_HdParmBasic"
      TIME = "SHORT"
      TEST_TYPE = "client"
      DOC = """
      This test uses hdparm to measure disk performance.
      """
      job.run_test('kernel_HdParmBasic', named_arg='kernel test')

   To this you can add the necessary control variables as described in the autotest best practices. job.run_test can take any named arguments, and the appropriate ones will be cherry-picked and passed on to the test.
3. Create a test file. At a bare minimum the test needs a run_once method, which should contain the implementation of the test; it also needs to inherit from test.test. Most tests also need initialize and cleanup methods. A sketch of such a test file follows.
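Here is a minimal sketch of such a test file; the file path and the placeholder method bodies are illustrative only (the real measurement logic is filled in later in this codelab):

    # client/site_tests/kernel_HdParmBasic/kernel_HdParmBasic.py
    # Minimal skeleton; method bodies are placeholders, not the final test.
    import logging

    from autotest_lib.client.bin import test


    class kernel_HdParmBasic(test.test):
        version = 1

        def initialize(self):
            # Set up any state the test needs before running.
            logging.debug('initialize')

        def run_once(self, named_arg=None):
            # named_arg arrives here from job.run_test in the control file.
            logging.info('run_once got named_arg=%s', named_arg)

        def cleanup(self):
            # Undo anything initialize/run_once changed on the DUT.
            logging.debug('cleanup')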
Notice how only run_once takes the argument named_arg, which was passed in by the control file's call to job.run_test.

Emerging and Running

Basic flow:
1. Add the test to IUSE_TESTS in the autotest-tests ebuild:

      # third_party/chromiumos-overlay/chromeos-base/autotest-tests/autotest-tests-9999.ebuild
      IUSE_TESTS="${IUSE_TESTS}
          # some other tests
          # ...
          +tests_kernel_HdParmBasic
      "
2. Start working on the package:
      cros_workon --board=lumpy start autotest-tests
3. Emerge the tests:
      emerge-lumpy chromeos-base/autotest-tests
   (If that fails because of dependency problems, you can try cros_workon --board=lumpy start autotest-chrome and append chromeos-base/autotest-chrome to the line above.)
4. Run the test:
      test_that -b lumpy DUT_IP kernel_HdParmBasic

If you'd like more perspective, you might benefit from consulting the troubleshooting doc.

Checking results

The results folder contains many logs. To analyze client test logging messages, you need to find kernel_HdParmBasic.DEBUG, kernel_HdParmBasic.INFO, or kernel_HdParmBasic.ERROR, depending on which logging function you used. Note that logging priorities escalate (debug < info < warning < error), so if you want to see all logging messages, just look in the debug logs. Client test logs should be under the debug directory of the test's results directory (substitute your own test's name as appropriate); you can also find the latest results in the location test_that prints at the end of a run.

In the DEBUG logs you should see messages like:
      01/18 12:22:46.716 DEBUG| kernel_HdParmBasic:0025| Your logging message
Note that print messages will not show up in these logs since we redirect stdout. If you've already run the test remotely once (e.g. via test_that), you can directly invoke your test on a client, as described in the previous section. Two things to note when using this approach:
a. print messages do show up
b. logging messages are also available under autotest/results/default/

Import helpers

You can import any autotest client helper module with the line:
      from autotest_lib.client.<dir> import <module>
You might also benefit from reading about how the framework makes autotest_lib available for you.

kernel_HdParmBasic needs test.test, so it needs to import test from client/bin. Looking back at our initial test plan, it also needs to:
1. Run hdparm. This implies running things on the command line; the modules to look at are base/site utils. However, common_lib's utils.py conveniently gives us both:
      from autotest_lib.client.bin import test, utils
2. Search the output for timing numbers.
3. Report this as a result.
For the last two it needs:
      import logging, re

run_once, cleanup and initialize

If your test manages any state on the DUT, it might need initialization and cleanup. In our case the subprocess handles its own cleanup, if any. Putting together all we've talked about, our run_once method looks like the sketch below.
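A minimal sketch, assuming hdparm is run against /dev/sda (an assumption about the DUT's disk) and that its output contains the usual 'Timing cached reads' and 'Timing buffered disk reads' lines; the regular expressions, the hdparm flags, and the error import are illustrative choices, not taken verbatim from the original codelab:

    # Sketch of the finished test; regexes, flags, and /dev/sda are assumptions.
    import logging, re

    from autotest_lib.client.bin import test, utils
    from autotest_lib.client.common_lib import error


    class kernel_HdParmBasic(test.test):
        version = 1

        def run_once(self, named_arg=None):
            logging.info('run_once called with named_arg=%s', named_arg)
            # -T times cached (buffer-cache) reads; -t times buffered disk reads.
            output = utils.system_output('hdparm -Tt /dev/sda')
            cache = re.search(r'Timing cached reads:.*=\s*([\d.]+) MB/sec', output)
            disk = re.search(r'Timing buffered disk reads:.*=\s*([\d.]+) MB/sec',
                             output)
            if not cache or not disk:
                raise error.TestFail('Could not parse hdparm output:\n%s' % output)
            # Report performance keyvals instead of just logging the numbers.
            self.write_perf_keyval({
                'cache_throughput': float(cache.group(1)),
                'disk_throughput': float(disk.group(1)),
            })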
Note the use of performance keyvals instead of plain logging statements. The keyvals are written to a keyval file under the test's results directory, and look like:

      ---------------------------------------
      kernel_HdParmBasic/kernel_HdParmBasic  cache_throughput  4346.76
      kernel_HdParmBasic/kernel_HdParmBasic  disk_throughput   144.28
      ---------------------------------------
