Fixed PackageWatchdog health check state
1. Receiving List&lt;PackageInfo&gt;:
Since I29e2d619a5296716c29893ab3aa2f35f69bfb4d7, we receive a List of
PackageInfo instead of Strings for packages supporting explicit health
checks. We now parse this List&lt;PackageInfo&gt; from ExtServices instead of
trying to parse a List&lt;String&gt;, and we use the health check timeout in
the PackageInfo as the health check expiry deadline instead of the total
package expiry time (a parsing sketch follows this message).

2. Updating health check durations onSupportedPackages:
Before, we always updated the health check duration for a package if the
package was supported and the health check state was not PASSED. This
caused the health check duration for a package to never decrease as long
as we kept getting onSupportedPackages. We improved the readability of
the state transitions in onSupportedPackages and now correctly update the
health check duration only for supported packages in the INACTIVE state.

3. FAILED state:
Before, we only had the INACTIVE, ACTIVE and PASSED states. When a
package failed the health check, we could notify the observer multiple
times in quick succession and get into a bad internal state with negative
health check durations. We added a check to ensure we don't schedule on a
Handler with a negative duration, and we defined a negative health check
duration (when the health check has not passed) to be a new FAILED state.
This clearly defines the state transitions as seen below (a sketch of the
transition logic follows this message):

 +----------+     +---------+     +------+
 |          |     |         |     |      |
 | INACTIVE +---->+ ACTIVE  +---->+PASSED|
 |          |     |         |     |      |
 +-----+----+     +----+----+     +------+
       |               |
       |               |
       |          +----v----+
       |          |         |
       +--------->+ FAILED  |
                  |         |
                  +---------+

4. Uptime state:
Every time we pruned observers, we scheduled the next prune and stored
the current SystemClock#uptimeMillis. This allowed us to determine how
much time had elapsed before the next prune. The uptime was not correctly
updated when starting to observe already observed packages. With the
following sequence of events:
- monitor package A for 1hr
- 30mins elapsed
- monitor package A again for 1hr
A would expire 30mins from the last event instead of 1hr. This was
because, the second time around, we saved the new state to disk but did
not reschedule, so we did not update the uptime at the last schedule; 1hr
after the first event, we would prune packages using the original uptime
and incorrectly expire A early. We now update all internal state when
rescheduling, which fixes this, and added a test for this case (a
rescheduling sketch follows this message).

5. Readability:
Improved method variable names, logging and comments.

Bug: 120598832
Test: Manual testing && atest PackageWatchdogTest
Change-Id: I1512d5938848ad26b668636405fe9b0db50d3a2e
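
Sketch for item 1. This is a minimal illustration only, not the actual
change: the callback shape and the accessors getPackageName() and
getHealthCheckTimeoutMillis() are hypothetical stand-ins for whatever the
ExtServices PackageInfo parcelable actually exposes.

    // Hypothetical callback: ExtServices now reports PackageInfo objects
    // rather than bare package-name strings for packages that support
    // explicit health checks.
    private void onSupportedPackages(List<PackageInfo> packages) {
        for (PackageInfo info : packages) {
            String packageName = info.getPackageName();
            // Use the per-package health check timeout as the health check
            // expiry deadline, not the total package expiry time.
            long healthCheckDurationMs = info.getHealthCheckTimeoutMillis();
            updateHealthCheckDuration(packageName, healthCheckDurationMs);
        }
    }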
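
Sketch for items 2 and 3. A rough illustration of the state mapping in
the diagram above, with hypothetical field and method names: a
non-positive remaining duration without a pass maps to FAILED, duration
updates only happen in INACTIVE, and nothing is ever scheduled on a
Handler with a negative delay.

    // Hypothetical per-package states mirroring the diagram above.
    enum HealthCheckState { INACTIVE, ACTIVE, PASSED, FAILED }

    private HealthCheckState getHealthCheckState(MonitoredPackage pkg) {
        if (pkg.mHasPassedHealthCheck) {
            return HealthCheckState.PASSED;
        } else if (pkg.mHealthCheckDurationMs <= 0) {
            // No pass and no time left: FAILED, so the observer is
            // notified once instead of repeatedly.
            return HealthCheckState.FAILED;
        } else if (pkg.mIsHealthCheckActive) {
            return HealthCheckState.ACTIVE;
        } else {
            return HealthCheckState.INACTIVE;
        }
    }

    private void onSupportedPackage(MonitoredPackage pkg, long newDurationMs) {
        // Only INACTIVE packages get a new duration, so repeated
        // onSupportedPackages callbacks can no longer reset the countdown.
        if (getHealthCheckState(pkg) == HealthCheckState.INACTIVE) {
            pkg.mHealthCheckDurationMs = newDurationMs;
        }
    }

    private void schedulePrune(Handler handler, Runnable prune, long delayMs) {
        // Never schedule with a negative delay.
        if (delayMs >= 0) {
            handler.postDelayed(prune, delayMs);
        }
    }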
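
Sketch for item 4. Again with hypothetical names (mLock, saveToFileAsync,
rescheduleCleanup, mUptimeAtLastScheduleMs): the point is that starting
to observe an already observed package now also reschedules the prune, so
the saved uptime is refreshed and the next prune measures elapsed time
from the right instant.

    private void startObservingHealth(String observerName,
            List<String> packages, long durationMs) {
        synchronized (mLock) {
            for (String packageName : packages) {
                putMonitoredPackageLocked(observerName, packageName, durationMs);
            }
            // Persist the new state and, crucially, reschedule so the
            // uptime at last schedule is refreshed. Previously only the
            // save ran here, so a later prune used a stale uptime and
            // expired the package too early.
            saveToFileAsync();
            rescheduleCleanup();
        }
    }

    private void rescheduleCleanup() {
        mUptimeAtLastScheduleMs = SystemClock.uptimeMillis();
        schedulePrune(mTimerHandler, this::pruneObservers,
                getNextPruneDelayMsLocked());
    }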