Electric rates continue to rise, and to mitigate the impact, smart facility managers are examining options for controlling their usage. Care is needed, however, in the economic analysis of such upgrades. Some efficiency choices may reduce consumption more than peak electric demand, and using average electric rates to evaluate them may result in disappointment. Understanding how rates are changing can go a long way toward navigating those increases.
Unlike residential electric rates that charge only for the kilowatt-hours (kWh) used each month, commercial facilities may be billed for both consumption and for how quickly they consume electricity, also known as demand. That speed of electric use is measured in kilowatts (kW), or in kilovolt-amps (kVA). Think of a kW as a kWh per hour: Your peak kW demand is the fastest rate of electricity use during a monthly billing period.
Demand charges are typically based on the highest peak kW seen each month. Depending on the tariff, that peak may occur at any time or may be limited to a time window, such as 8 a.m. to 6 p.m. on weekdays. It is not, however, based on the very brief instantaneous demand spike that occurs when a motor or light is first started. Instead, it's often calculated as the highest kWh consumed in any 15-, 30-, or 60-minute interval during the month. That consumption is divided by the length of the interval, in hours, to derive the peak billed demand.
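As a minimal sketch of that calculation, the snippet below uses hypothetical interval meter readings and an assumed 15-minute demand interval; the arithmetic simply takes the largest kWh recorded in any interval and divides by the interval length in hours:

```python
# Hypothetical interval meter data: kWh consumed in each 15-minute
# interval of a billing period (only a few intervals shown).
interval_kwh = [22.5, 30.0, 45.0, 37.5, 25.0]

INTERVAL_MINUTES = 15                    # demand interval (assumed tariff)
interval_hours = INTERVAL_MINUTES / 60   # 0.25 hours

# Billed peak demand: highest interval consumption / interval length.
peak_interval_kwh = max(interval_kwh)    # 45.0 kWh in the busiest interval
billed_peak_kw = peak_interval_kwh / interval_hours

print(billed_peak_kw)  # 180.0 kW
```

Note that a shorter demand interval yields a higher billed peak for the same spike: 45 kWh in 15 minutes bills as 180 kW, while the same 45 kWh spread over a 60-minute interval would bill as only 45 kW.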
This means that a single brief spike in usage, such as running all chillers simultaneously for 15 minutes during a hot spell, perhaps only once in a month, could set the peak demand charge for that month. Where a ratchet (also called "contract demand") clause exists, such a spike could also set a charge that would be levied each month for an entire year.
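The lingering cost of a ratchet clause can be sketched as follows. All figures here are hypothetical: an assumed 80% ratchet percentage and an assumed $12/kW demand rate, with each month billed at the greater of its actual peak and the ratchet floor set by the prior 12 months' highest peak:

```python
RATCHET_PCT = 0.80   # assumed ratchet percentage
DEMAND_RATE = 12.00  # assumed demand charge, $/kW

def billed_demand_kw(actual_peak_kw: float, highest_peak_12mo_kw: float) -> float:
    """Billed kW under the assumed ratchet clause: the greater of this
    month's actual peak and 80% of the highest peak in the prior year."""
    return max(actual_peak_kw, RATCHET_PCT * highest_peak_12mo_kw)

# One 500 kW spike during a hot spell...
spike_kw = 500.0

# ...keeps a later, milder month (actual peak 250 kW) billed at 400 kW:
later_month_kw = billed_demand_kw(250.0, spike_kw)
print(later_month_kw)                # 400.0 kW
print(later_month_kw * DEMAND_RATE)  # 4800.0 -> $4,800 demand charge
```

Under these assumed numbers, the single spike adds $1,800 per month ($4,800 versus the $3,000 the 250 kW peak alone would have cost) for every month the ratchet remains in effect.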