How is CPU load measured?

Linux can show you how busy each processor is with a simple command-line request.

Time slices are not measured in billionths of a second; they are not that short. They are more likely in the range of milliseconds. Resolution of time values in APIs is not the same as the rate of timer interrupts.

Some API calls in Linux have times specified in nanoseconds, but you wouldn't want timer interrupts that frequently: with a million interrupts per second, you would spend all the CPU time on context switches.

The more cores a system has, the more tasks it can handle in parallel.

The uptime command reports the system's load averages; reading them against the number of CPU cores shows how busy the machine really is. A load average of 1.0 saturates a single-core system but leaves a four-core system mostly idle.
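As a minimal sketch (the function name and the sample values below are illustrative, not from the original), normalizing a load average by core count makes the comparison explicit:

```python
import os

def load_per_core(load_avg, cores=None):
    """Normalize a load average by the number of CPU cores.

    A result near 1.0 means the cores are fully busy; above 1.0
    means runnable tasks are queuing for CPU time.
    """
    if cores is None:
        cores = os.cpu_count() or 1
    return load_avg / cores

# A load average of 1.0 saturates one core but only
# quarter-loads a four-core machine:
# load_per_core(1.0, cores=1) -> 1.0
# load_per_core(1.0, cores=4) -> 0.25
```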

Specifically, histogram analysis of the time variation can help the tester discern which data represent measured background periods that executed uninterrupted and which have been artificially extended through context switching.

Figure 1: Sample Histogram

Figure 1 shows a histogram of an example data set.

This data set contains a time variation of the measured idle-task period. This knowledge can help you isolate which histogram data to discard and which to keep. Once you know the average background-task execution time, you can measure the CPU utilization while the system is under various states of loading.

Obviously there's no way yet to measure CPU utilization directly. You'll have to derive the CPU utilization from measured changes in the period of the background loop. You should measure the average background-loop period under various system loads and graph the CPU utilization.

For example, if you're measuring the CPU utilization of an engine management system under different system loads, you might plot engine speed (revolutions per minute, or RPM) versus CPU utilization. Assume the average background loop is measured given the data in Table 1. Note that the background-loop period should only be collected after the system has been allowed to stabilize at each new load point.

Now you've collected all the information you'll need to calculate CPU utilization under specific system loading. Recall from Equation 1 that the CPU utilization is defined as the time not spent executing the idle task. The amount of time spent executing the idle task can be represented as a ratio of the period of the idle task in an unloaded CPU to the period of the idle task under some known load, as shown in Equations 1 and 2.
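Equations 1 and 2 are not reproduced in this excerpt, so the following is only a sketch of that ratio (function name and sample periods are illustrative assumptions): the idle fraction is the unloaded loop period divided by the elongated loaded period, and utilization is the remainder.

```python
def cpu_utilization_pct(unloaded_period, loaded_period):
    """Derive CPU utilization from background-loop periods.

    Interrupt and task activity stretches the average background-loop
    period, so the fraction of time still spent idle is
    unloaded_period / loaded_period; utilization is what's left.
    Both periods must be in the same units (e.g., microseconds).
    """
    idle_fraction = unloaded_period / loaded_period
    return 100.0 * (1.0 - idle_fraction)

# If the loop takes 50 us unloaded but averages 200 us under load,
# the CPU is idle only 25% of the time:
# cpu_utilization_pct(50.0, 200.0) -> 75.0
```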

Table 2 shows the results of applying Equations 1 and 2 to the data in Table 1. Figure 2 shows the salient data in graphical form.

Of course you'll want to reduce the amount of manual work to be done in this process. With a little up-front work instrumenting the code, you can significantly reduce the labor necessary to derive CPU utilization.

Figure 2: CPU utilization vs. engine speed

Counting background loops

The next method is a simple advance on the use of the logic state analyzer (LSA) and histogram. The concept is that, under ideal unloaded conditions, the idle task would execute a known and constant number of times during any set time period (one second, for instance).

Most systems provide a time-based interrupt that you can use to compare a free-running background-loop counter against this known constant. Let's say we use a 25ms periodic task to monitor CPU utilization. We enhance the while(1) loop of Listing 2 so that a free-running counter is incremented every time through the loop, as shown in Listing 3. A free-running counter uses a variable that, when incremented, is allowed to overflow.

No math protection is needed or desired because the math that will look for counter changes can comprehend an overflow situation. Math protection would just add unnecessary overhead. We still know the average nonloaded background-loop period from the LSA measurements we collected and postprocessed.

Therefore, in a 25ms time frame, the idle task would execute a known, fixed number of times if it were never interrupted. We must modify the 25ms task as shown in Listing 4 to use this count to calculate the CPU utilization, and we have to retain the previous loop count so that a delta can be calculated. The delta indicates how many times the background loop executed during the immediately previous 25ms time frame. Comparing this value to the maximum loop count indicates how much time was spent in the idle task versus doing other processing.
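Listing 4 isn't reproduced here either; the following is a sketch of the delta calculation (Python standing in for the article's C, with MAX_LOOP_COUNT an assumed calibration constant derived from the measured unloaded loop period):

```python
COUNTER_MASK = 0xFFFF     # assumed 16-bit free-running counter
MAX_LOOP_COUNT = 500      # assumed: unloaded loop iterations per 25ms

def cpu_load_pct(current_count, previous_count):
    """Run from the 25ms task: the masked subtraction yields the
    correct delta even if the counter wrapped since the last sample,
    which is why no explicit overflow protection is needed."""
    delta = (current_count - previous_count) & COUNTER_MASK
    idle_fraction = delta / MAX_LOOP_COUNT
    return 100.0 * (1.0 - idle_fraction)

# 250 loop iterations out of a possible 500 means 50% load:
# cpu_load_pct(250, 0) -> 50.0
```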

Scaling the result into integer "computer units" like this is a common trick used to maximize the resolution of a variable. To convert back to a real percentage, use Equation 4. The conversion from computer units back into engineering units can be done after you've collected the data.
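Equation 4 itself is not reproduced in this excerpt, so the following shows only the general shape of such a conversion; the scale factor is an assumption, not the article's actual constant.

```python
SCALE = 65536 // 100   # assumed factor: spread 0..100% across a 16-bit range

def to_computer_units(pct):
    """Scale a percentage up into an integer so fixed-point math
    keeps as much resolution as possible."""
    return int(pct * SCALE)

def to_engineering_units(raw):
    """Invert the scaling (the role Equation 4 plays in the article):
    turn the stored integer back into a percentage."""
    return raw / SCALE

# to_computer_units(75.0) -> 49125; back out: 49125 / 655 = 75.0
```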

This allows the end result to retain as much resolution as possible. Of course, if you're using floating-point math, you can do the conversion in the actual C code. Once these changes have been implemented, you must be able to retrieve the CPU-utilization value from the processor. You then pull the data into a spreadsheet and manipulate it to create the graph shown previously in Figure 2. Table 3 shows how the data would look, along with some of the intermediate calculations you can do.

Some instrumentation solutions allow the scaled value to be converted from computer units to engineering units automatically. Automating the system Although counting background loops is more convenient than collecting all of the data on an LSA, it still requires a fair amount of human preparation and verification.

Every time the software set is changed, a human tester must verify that the background loop hasn't changed in some way that would cause its average period to change. If the loop has changed, a human must reconnect the LSA, collect some data, statistically analyze it to pull out elongated idle loops (loops interrupted by time and event tasks), and then convert this data back into a constant that must be injected back into the code.

When load runs high, processes are regularly waiting for CPU time, and users are probably experiencing significantly degraded performance. Linux measures things a little differently from classic Unix, and understanding how is key to good system administration. Linux measures CPU load by looking at programs that are currently using or waiting for CPU time as well as programs in uninterruptible wait states (typically blocked on disk I/O).
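On Linux those load figures are exposed through /proc/loadavg, whose first three fields are the 1-, 5-, and 15-minute averages. A small parser, run here against a sample line rather than a live system (the sample values are illustrative):

```python
def parse_loadavg(line):
    """Split a /proc/loadavg line into its three load averages.

    The documented format is:
    "<1min> <5min> <15min> <runnable>/<total_tasks> <last_pid>"
    """
    fields = line.split()
    return tuple(float(f) for f in fields[:3])

# Illustrative sample line:
# parse_loadavg("1.05 0.70 5.09 2/131 12345") -> (1.05, 0.7, 5.09)
```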

On Linux, the load average does count those waiting tasks. Whether this is a better or worse approach depends on your use case; traditional Unix load averages never surfaced that data at all. Sometimes, the process of optimizing your CPU loads is pretty easy.

Maybe your server will only ever utilize two threads at the same time. In that case, optimizing server loads is no more complicated than having a dual-core processor. Most servers are much more complicated. And one of the real challenges is keeping the number of cores as low as possible, because that saves you money, without going too low and degrading user experiences. It could be that you need to optimize the code your server runs. It could be that you really do need to up the core count on your processor.

Bottlenecked servers can have a number of causes, and can be fixed in a number of ways.


