IDAHO POWER
An IDACORP Company

BARTON L. KLINE
Lead Counsel
March 25, 2009
VIA HAND DELIVERY
Jean D. Jewell, Secretary
Idaho Public Utilities Commission
472 West Washington Street
P.O. Box 83720
Boise, Idaho 83720-0074
Re: Case No. IPC-E-09-02
EnerNOC
Dear Ms. Jewell:
Enclosed for filing please find an original and three (3) copies of Idaho Power's
Response to the Commission Staff's First Production Request to Idaho Power Company in
the above matter.
Also, enclosed in a separate envelope are an original and three (3) copies of Idaho
Power's Confidential Response to the Commission Staff's First Production Request to
Idaho Power Company. Please note this information should be handled in accordance with
the Protective Agreement between Idaho Power and Staff.
Very truly yours,

Barton L. Kline
BLK:csb
Enclosures
P.O. Box 70 (83707)
1221 W. Idaho St.
Boise, ID 83702
BARTON L. KLINE, ISB #1526
LISA D. NORDSTROM, ISB #5733
Idaho Power Company
P.O. Box 70
Boise, Idaho 83707
Telephone: 208-388-2682
Facsimile: 208-338-6936
bkline@idahopower.com
lnordstrom@idahopower.com
RECEIVED
2009 MAR 25 PM 4:57
Attorneys for Idaho Power Company
Street Address for Express Mail:
1221 West Idaho Street
Boise, Idaho 83702
BEFORE THE IDAHO PUBLIC UTILITIES COMMISSION
IN THE MATTER OF IDAHO POWER   )
COMPANY'S APPLICATION FOR      )   CASE NO. IPC-E-09-02
APPROVAL OF AN AGREEMENT TO    )
IMPLEMENT A COMMERCIAL         )   IDAHO POWER COMPANY'S RESPONSE
DEMAND RESPONSE PROGRAM.       )   TO THE COMMISSION STAFF'S FIRST
                               )   PRODUCTION REQUEST TO IDAHO
                               )   POWER COMPANY
COMES NOW, Idaho Power Company ("Idaho Power" or "the Company"), and in
response to the First Production Request of the Commission Staff to Idaho Power
Company dated March 17, 2009, herewith submits the following information:
IDAHO POWER COMPANY'S RESPONSE TO THE COMMISSION
STAFF'S FIRST PRODUCTION REQUEST TO IDAHO POWER COMPANY - 1
REQUEST NO. 1: On page 2 of the Application Idaho Power (Company) says
that "EnerNOC has been selected by Idaho Power through a competitive RFP process
to implement the Program on a turn-key basis." Please identify all RFP participants and
explain why EnerNOC was chosen.
RESPONSE TO REQUEST NO. 1: This response is confidential and has been
produced pursuant to the Protective Agreement.
The response to this Request was prepared under the direction of Pete Pengilly,
Leader Customer Research and Analysis, Idaho Power Company, in consultation with
Barton L. Kline, Lead Counsel, Idaho Power Company.
REQUEST NO. 2: On page 2 of the Application the Company says that
"EnerNOC has successfully implemented similar programs for other utilities throughout
the country." Please list utilities for which EnerNOC has implemented similar programs
and describe the success of those programs.
RESPONSE TO REQUEST NO. 2: EnerNOC is the largest provider of
"aggregator-based" demand response programs to utilities and grid operators in the
United States, with over 2,500 MW of load reduction capacity currently under
management. EnerNOC is currently implementing similar programs for numerous
utilities, including Xcel Energy, Puget Sound Energy, Salt River Project, Pacific Gas &
Electric, Southern California Edison, San Diego Gas & Electric, Public Service
Company of New Mexico, Tampa Electric Company, Burlington Electric Company, and
more than 40 utility customers of the Tennessee Valley Authority. Most of these utility
contracts were won through a competitive RFP process, and each program is similar in
structure to that proposed for Idaho Power Company, though each has specific
attributes in terms of MW commitments, response time, hours of dispatch, and other
program parameters.
In most cases, utilities measure program success by the following factors: (1)
achieving committed capacity levels by the milestones established in the contract, (2)
delivering committed load reductions reliably when dispatched, and (3) maintaining high
levels of customer satisfaction.
In addition, EnerNOC is implementing demand response programs under
contract to grid operators in every deregulated utility market, including ISO-New
England, PJM Interconnection, New York ISO, Ontario Power Authority, and the Electric
Reliability Council of Texas. Across all of EnerNOC's 100-plus events in 2008, event
performance (measured as delivered capacity over contracted capacity) averaged 102
percent.
The response to this Request was prepared under the direction of Pete Pengilly,
Leader Customer Research and Analysis, Idaho Power Company, in consultation with
Barton L. Kline, Lead Counsel, Idaho Power Company.
REQUEST NO. 3: On page 3 of the Application the Company says that "A load
reduction plan will be created for each participant with the goal of achieving the desired
load reduction without negatively impacting business operations." Please provide
examples of such load reduction plans. Will Idaho Power have access to these plans?
If not, why not? If so, how will Idaho Power be able to utilize this information in future
operations?
RESPONSE TO REQUEST NO. 3: A customized load reduction plan will be
created for each customer. The load reduction plan lists all load-reducing strategies
specific to that customer. Examples of simple curtailment plans are one-line
instructions such as:
1. Shut down asphalt plant
2. Reduce lighting to "night setback" mode
More complicated load reduction plans contain detailed instructions, such as:
1. Disable left corridor supply fans SF-10, 11, and 12 using the BMS
2. Disable chilled water (CHW) and condenser water (CW) pumps via
BMS
3. Disable the two new electric hallway heaters manually in Valet area
4. Disable the two front vestibule heaters manually
5. Disable SF-2 supplemental electric heaters via BMS
6. Curtail 1/3 lighting in ballrooms 1 & 2 via A/B switching
Idaho Power will have access to these plans. The Company has not determined
how it might utilize this information in future operations, but would not be restricted in its
use.
The response to this Request was prepared by Billie Jo McWinn, Commercial
Program Specialist, Idaho Power Company, in consultation with Barton L. Kline, Lead
Counsel, Idaho Power Company.
REQUEST NO. 4: On page 3 of the Application the Company says that
"EnerNOC will reimburse Idaho Power for all costs associated with installing the pulse
initiated metering devices, including the cost of the metering devices themselves."
Please explain how often the Company is estimating these pulse initiated meter
installations to occur, what the time period will be before EnerNOC reimburses Idaho
Power for its installation costs, and what the estimated costs associated with installing
pulse initiated metering devices will be.
RESPONSE TO REQUEST NO. 4: Idaho Power estimates that fewer than 20
meters will need to be equipped with pulse output boards for the summer of 2009, and
the installations will occur when EnerNOC notifies Idaho Power that a customer site
requires pulse initiated metering. Idaho Power will determine the appropriate metering
equipment required to enable pulse output at that site and will install the equipment at
that site. Idaho Power will record labor and material costs, which are estimated to be
less than $300 per site in most cases, and bill EnerNOC monthly, on a net 30 basis. If
the costs are estimated to exceed $300 per site, Idaho Power will notify EnerNOC in
advance for approval prior to any work being completed.
The response to this Request was prepared by Billie Jo McWinn, Commercial
Program Specialist, Idaho Power Company, in consultation with Barton L. Kline, Lead
Counsel, Idaho Power Company.
REQUEST NO. 5: Does the "pulse initiated metering device" differ from the AMI
metering that the Company is currently installing and the meters that Transmission and
Primary Schedule 9 and Schedule 19 customers currently have? If so, fully explain the
differences.
RESPONSE TO REQUEST NO. 5: "Pulse initiated" refers to the pulse output
available at the metering device, whether or not the metering device is AMI. Currently,
Schedule 9 Transmission and Primary, in addition to Schedule 19, customer meters are
pulse initiated, and most would only need a box installed to house contacts for
connection to EnerNOC equipment. Schedule 9 Secondary customers that do not
currently have pulse output available at the metering device would require the
installation of a pulse output board in the meter, whether or not the metering device is
AMI.
The response to this Request was prepared by Billie Jo McWinn, Commercial
Program Specialist, Idaho Power Company, in consultation with Barton L. Kline, Lead
Counsel, Idaho Power Company.
REQUEST NO. 6: Please describe the Company's interim and long-term
evaluation process and evaluators for the Commercial Demand Response Program.
Based on interim evaluation results, will Idaho Power be able to end the program prior
to the five-year term?
RESPONSE TO REQUEST NO. 6: Idaho Power plans to conduct individual
post-event evaluations, as well as a full impact and process evaluation once the
program has reached maturity. For individual events, energy efficiency analysts and
evaluators will reconcile event data with reported results from EnerNOC to ensure each
event is performing as reported. This data will be evaluated using 15-minute interval
data for the duration of each program event and for each individual customer.
In addition to internal evaluation, Idaho Power plans to conduct a third-party
impact and process evaluation regarding program delivery, customer satisfaction,
trued-up program savings, and overall program success. These evaluations are planned for
the summer immediately following the first summer of full program maturity, expected to
be in 2011.
Idaho Power is committed to the full five-year contract term. If at any time the
program is found to be unsuccessful and/or not cost effective, Idaho Power and
EnerNOC will work together to make any program changes necessary for program
success through the remaining length of the five-year contract.
The response to this Request was prepared under the direction of Pete Pengilly,
Leader Customer Research and Analysis, Idaho Power Company, in consultation with
Barton L. Kline, Lead Counsel, Idaho Power Company.
REQUEST NO. 7: The agreement allows "Idaho Power to require demand
reduction up to sixty hours per season with up to twenty events per season." What
strategy will the Company use to determine the total events called, the timing of each
event, and the duration of each event in order to maximize the Program's benefit?
RESPONSE TO REQUEST NO. 7: Idaho Power's demand response program
specialist and analysts will work closely with generation dispatch to determine the
qualifying factors and parameters for calling a demand response event. The
Company's main goal with this program is to call an event on system peak days. While
there is a slight disincentive to call events more than eight times per month, it is still cost
effective for the Company to call up to all 20 events if necessary, which, if needed, would
produce the most benefit for this program.
Idaho Power dispatchers and demand response program specialists will work
together to ensure that all demand response programs are called with the timing and
duration that provide the most benefit from each program to the Company and its
customers.
The response to this Request was prepared under the direction of Pete Pengilly,
Leader Customer Research and Analysis, Idaho Power Company, in consultation with
Barton L. Kline, Lead Counsel, Idaho Power Company.
REQUEST NO. 8: How will Idaho Power "monitor and confirm" EnerNOC's
reported "event" results?
RESPONSE TO REQUEST NO. 8: Idaho Power will be tracking customer and
event data on its own internal database with the same data received from the metering
equipment installed by EnerNOC. Idaho Power has worked closely with EnerNOC to
establish a process for reconciliation and confirmation of demand reduction for all
events. After an event is completed, Idaho Power will be sent all customer event data,
as stated in Section 9.1 of the contract, for complete review and analysis. Idaho Power
analysts will compare internal results to EnerNOC's to confirm program performance
results and detect any discrepancies between Idaho Power and EnerNOC. If any
inconsistencies arise between the results sent by EnerNOC and Idaho Power's internal
analysis, Idaho Power will follow the dispute resolution procedures stated in Section
11.5 of the contract to resolve any differences in the reported results.
The response to this Request was prepared under the direction of Pete Pengilly,
Leader Customer Research and Analysis, Idaho Power Company, in consultation with
Barton L. Kline, Lead Counsel, Idaho Power Company.
REQUEST NO. 9: Please provide lists of the invitees to and attendees at the
February 25, 2008 Industrial Customer program meeting. What was the consensus of
the group regarding the anticipated success of the Program? Please provide copies of
Idaho Power's notes taken at this meeting and subsequent correspondence among
Idaho Power employees related to this meeting.
RESPONSE TO REQUEST NO. 9: On February 25, 2009, Idaho Power
attended a meeting of the Industrial Customers of Idaho Power ("ICIP") to present the
proposed Commercial DR program. Idaho Power attended ICIP's meeting and
therefore does not have a list of invitees. Those attending either in person or by phone
for ICIP were Pete Richardson, Don Sturtevant, Don Reading, David Hawk, Bobby
Adam, and Mike Henderson. Those attending for Idaho Power were Billie McWinn,
Program Specialist; Danielle Giddings, Energy Efficiency Analyst; Darlene Nemnich,
Senior Pricing Analyst; and Brad Davids, Senior Director, representing EnerNOC.
There was no consensus taken regarding the anticipated success of the program.
There were no official notes taken for the meeting; however, the topics discussed
included the 2-hour notice window, matching the event duration to the On-Peak time
block, how contracts will be negotiated with EnerNOC, and the desirability of having
more programs available to industrial customers.
The response to this Request was prepared by Darlene Nemnich, Senior Pricing
Analyst, Idaho Power Company, in consultation with Barton L. Kline, Lead Counsel,
Idaho Power Company.
REQUEST NO. 10: Idaho Power's proposed contract with EnerNOC states that
up to twenty events may be called from June 1st to August 31st of each year. Why did
Idaho Power determine 20 events as the maximum negotiated limit instead of the
anticipated 15 events it expects will occur? Why did the Company negotiate 60 event
hours per season that would result in event durations of 4 hours?
RESPONSE TO REQUEST NO. 10: Prior to issuing an RFP, Idaho Power
spoke with aggregators, as well as utilities that had implemented commercial demand
response programs, to get an overall idea of the number of event hours that not only
maximize utility and customer value but also provide a load reduction resource with the
lowest negative impact on the customer. Idaho Power then researched its own internal
summer load patterns and, combined with the customer impacts, determined the target
number of event days and hours which would provide the highest value to Idaho Power
and its commercial and industrial customers. Discussions were held with Idaho Power
system dispatchers and daily load forecasters to determine the number of hours and
length of events that would maximize the system benefit while minimizing customer
impact.
The cost effectiveness of the program also played a role in the maximum number
of events and event duration. Customers are increasingly impacted by each additional
event over the summer season. EnerNOC's proposed capacity and energy charge to
Idaho Power increased as the total number of event season hours grew to reflect the
incremental effort required to reach contracted load reduction. Idaho Power completed
sensitivity analysis on different event and hours scenarios to determine that a maximum
of 20 events and 60 hours would provide the overall highest cost-effective value. These
event parameters also closely align with the other Idaho Power demand response
programs.
The 2-4 hour event duration reflects the number of hours the Company faces
critical peak levels during operating hours of commercial buildings. While peak loads
can last long into the evening, a majority of commercial loads drop after 6 p.m., and
demand response is no longer needed. Events lasting longer than 4 hours would also
limit the number of potential customers able to participate if their loads could not be
reduced or curtailed for that length of time.
The response to this Request was prepared under the direction of Pete Pengilly,
Leader Customer Research and Analysis, Idaho Power Company, in consultation with
Barton L. Kline, Lead Counsel, Idaho Power Company.
REQUEST NO. 11: Was a cost/benefit analysis completed comparing EnerNOC
managing this Program vs. the Company? If yes, please provide this analysis in
executable electronic format.
RESPONSE TO REQUEST NO. 11: A traditional cost/benefit analysis was not
completed comparing EnerNOC managing the program versus the Company.
The response to this Request was prepared under the direction of Pete Pengilly,
Leader Customer Research and Analysis, Idaho Power Company, in consultation with
Barton L. Kline, Lead Counsel, Idaho Power Company.
REQUEST NO. 12: Explain how variable energy payments made to EnerNOC
affect cost-effectiveness. In addition, provide detail on how the Company plans to
strategically utilize its number of events and event durations in order to maximize utility
net benefit.
RESPONSE TO REQUEST NO. 12: Program costs consist of both capacity and
energy payments. The capacity payments to EnerNOC are fixed based on a committed
demand reduction. Energy payments are variable and dependent on the number of
events called during each event week and month.
Energy payments make up a small percentage of total payments to EnerNOC
and have minimal impact on total program cost effectiveness. In the likely
scenario where 5 events in July would carry energy payments, the expected total
resource cost ratio in year 5 is 1.12. If Idaho Power is required to call all events in July,
resulting in 20 total events and 12 events with energy payments, the total resource cost
ratio drops to 1.04. The total dollar difference between these two scenarios is
$963,000, or 7 percent of total estimated contract costs.
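Treating the stated $963,000 difference and 7 percent share as given, the implied total estimated contract cost can be backed out with a line of arithmetic (the variable names below are illustrative):

```python
# Figures stated above in this Response.
cost_difference = 963_000   # dollars between the 5-event and 20-event scenarios
share_of_total = 0.07       # that difference as a fraction of total contract costs

# Back out the implied total estimated contract cost.
implied_total = cost_difference / share_of_total
print(f"${implied_total:,.0f}")  # roughly $13.8 million
```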
Idaho Power realizes that this demand response program holds the most value
when it can be utilized whenever necessary. Given that the program is cost
effective even in the event that all potential energy payments must be paid to EnerNOC,
the Company is not currently planning to strategically utilize the number of events over
the event season. Idaho Power program managers are planning to work closely with
Company dispatch to ensure that commercial demand response events are called
only when necessary and when they provide the highest value to the Company and its
customers.
The response to this Request was prepared under the direction of Pete Pengilly,
Leader Customer Research and Analysis, Idaho Power Company, in consultation with
Barton L. Kline, Lead Counsel, Idaho Power Company.
REQUEST NO. 13: Please provide an explanation outlining the chances that the
third scenario ("fewer than eight events per month") in the Company's Application would
occur.
RESPONSE TO REQUEST NO. 13: Historical summer demand data from 2005,
2006, and 2007 was used to create a forecast of potential event patterns for each
summer month. Idaho Power took the top 20 peak days of each summer and
determined in what month each peak day occurred. In all of the summer months
analyzed, at least one of the top 15 peak days fell in each month. When the top 20
peak days were analyzed, at least two peak days fell in each of the months June and
August. For each summer season, a majority of the top peak days occurred in July.
Based on historical trends, Idaho Power established a "likely" scenario of the
anticipated number of events called each month, with 1 event called in June, 13 in
July, and 1 in August. Given historical trends, it is not likely that the number of events
called each month will be allocated evenly. It is also not likely that fewer than 8 events
will be called in July.
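The peak-day analysis described above amounts to ranking each summer's daily system peaks and counting how many of the top days fall in each month. A minimal sketch, using made-up peak values rather than Idaho Power's actual load data:

```python
from collections import Counter

def top_peak_days_by_month(daily_peaks, n):
    """daily_peaks: list of ('YYYY-MM-DD', peak_mw) pairs.
    Return a per-month count of the n highest peak days."""
    top = sorted(daily_peaks, key=lambda d: d[1], reverse=True)[:n]
    return Counter(date[5:7] for date, _ in top)

# Illustrative daily peaks (MW) -- not actual system data.
peaks = [
    ("2007-06-15", 2950), ("2007-06-28", 3010),
    ("2007-07-12", 3120), ("2007-07-13", 3150), ("2007-07-20", 3090),
    ("2007-08-01", 3040),
]
counts = top_peak_days_by_month(peaks, n=4)  # e.g. Counter({'07': 3, '08': 1})
```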
The response to this Request was prepared under the direction of Pete Pengilly,
Leader Customer Research and Analysis, Idaho Power Company, in consultation with
Barton L. Kline, Lead Counsel, Idaho Power Company.
REQUEST NO. 14: Please explain the logic behind the '3 in 10' model for
determining baseline energy usage.
RESPONSE TO REQUEST NO. 14: Idaho Power consulted with EnerNOC, as
well as utilities with commercial demand response programs, to help determine a
method for measuring baseline. Since baseline measurements are a critical component
of accurately valuing demand response events, the industry has produced numerous
evaluations and reports on the most accurate and fair method for estimating
baseline.
As stated by EnerNOC, the most accurate way to determine the "shape" of the
baseline for a given facility is to use recent days that are similar to those when events
are likely to be called. Since events are likely to be called on business days when the
temperature is high, using the average of the highest 3 of the last 10 non-event
business days tends to provide a good approximation of the load shape on the event
day.
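A sketch of the 3 in 10 calculation as described: average the three highest-usage of the last ten non-event business days, hour by hour. The ranking criterion (total daily energy) and the data shapes are illustrative assumptions; the filing does not spell out how "highest" is measured.

```python
from statistics import mean

def baseline_3_in_10(day_profiles):
    """Estimate an event-day baseline load shape.
    day_profiles: 10 hourly load profiles (kW), one per non-event
    business day. Days are ranked by total energy (an assumed
    criterion), and the top 3 are averaged hour by hour."""
    if len(day_profiles) != 10:
        raise ValueError("expected exactly 10 non-event business days")
    top3 = sorted(day_profiles, key=sum, reverse=True)[:3]
    return [mean(hour_values) for hour_values in zip(*top3)]
```

The 2-hour morning adjustment the Company adopted would then scale this profile by the ratio of actual to baseline load in the hours just before the event.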
During contract negotiation, EnerNOC provided Idaho Power with a report from
the Lawrence Berkeley National Laboratory titled Estimating Demand Response Load
Impacts: Evaluation of Baseline Load Models for Non-Residential Buildings in California
(attached hereto). This report studied various methods of calculating baseline
measurements, methodology strengths and weaknesses, and recommendations for
continued research.
The report refers to baseline estimates as the Baseline Load Profile ("BLP").
Seven different BLP scenarios are evaluated, with BLP3 being the 3 in 10 model with a
morning adjustment factor. As stated on page iii of the report, key findings include:
1. The accuracy of the BLP model currently used by California utilities
to estimate load reductions in several DR programs (i.e., hourly usage in highest 3 out
of 10 previous days) could be improved substantially if a morning adjustment factor
were applied for weather-sensitive commercial and institutional buildings.
2. Applying a morning adjustment factor significantly reduces the bias
and improves the accuracy of all BLP models examined in our sample of buildings.
3. For buildings with low load variability, all BLP models perform
reasonably well in accuracy.
During its visit to a California utility that contracts with EnerNOC, Idaho Power
learned that the 3 in 10 model was the method used for its commercial demand
response programs, which have been running and evaluated since 2005. This
California utility determined that the 3 in 10 model was not only the most accurate
methodology for calculating baseline, but also prevented potential customer gaming.
Idaho Power has decided to use the 3 in 10 model with a 2-hour morning
adjustment to estimate customer baseline. The Company is confident that this method
best leverages the knowledge and research already developed by the industry. Upon
program deployment, Idaho Power will closely monitor and evaluate the 3 in 10 model
to verify whether this method creates the most accurate results and highest value for
Idaho Power and its customers.
The response to this Request was prepared under the direction of Pete Pengilly,
Leader Customer Research and Analysis, Idaho Power Company, in consultation with
Barton L. Kline, Lead Counsel, Idaho Power Company.
REQUEST NO. 15: Please provide the EnerNOC reports and methods used to
determine 5% as an accurate approximation of 'snapback' given actual events with
other utilities.
RESPONSE TO REQUEST NO. 15: Idaho Power asked EnerNOC to assist
in the snapback analysis. EnerNOC's experience with other utilities provided actual
snapback data for commercial and industrial customers participating in
demand response events in the western United States. EnerNOC provided the
Company with actual customer event data giving examples of post-event load
increases (attached hereto).
In addition to the report, Brad Davids, Senior Director of EnerNOC, stated to
Idaho Power on October 23, 2008:
The maximum snapback we observed at any point was 14%
of the nominated capacity, or 6% of the baseline, but in most
cases it was negligible. Unless the utility wants snapback
(due to revenue erosion concerns), we work very hard to
minimize it when restoring sites to full operation after events.
Because EnerNOC is aware of what systems are being curtailed across all of
its participants, EnerNOC is able to bring load back onto the system in a manner that
does not create an immediate and substantial snapback effect. In addition, the end
uses being curtailed also have a large impact on the amount of potential snapback
effects. Unlike residential A/C cycling, where the unit has to make up for the increase in
temperature during an event, a substantial amount of commercial curtailment is from
lighting, which does not result in an increase of load over what would have been normal
operations.
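The two percentages EnerNOC quotes imply a relationship between nominated capacity and baseline load at those sites that can be checked directly (the variable names are illustrative):

```python
# From EnerNOC's statement: the maximum observed snapback was 14% of
# nominated capacity, equivalently 6% of baseline load.
snapback_vs_nominated = 0.14
snapback_vs_baseline = 0.06

# Implied ratio of nominated capacity to baseline load at those sites:
# the same snapback is a smaller share of the larger quantity.
nominated_over_baseline = snapback_vs_baseline / snapback_vs_nominated
print(round(nominated_over_baseline, 2))  # 0.43
```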
After reviewing the report and comments by EnerNOC, as well as conducting its
own research on commercial and industrial snapback effects, Idaho Power determined
that a 5 percent increase in energy usage above baseline after an event is
representative of what is expected to occur.
This data will be closely monitored on an aggregate and individual level during
the first event season and will be adjusted accordingly as actual Idaho Power load data
is compiled and analyzed. The snapback effect is also a key piece of the full program
impact evaluation.
The response to this Request was prepared under the direction of Pete Pengilly,
Leader Customer Research and Analysis, Idaho Power Company, in consultation with
Barton L. Kline, Lead Counsel, Idaho Power Company.
REQUEST NO. 16: Please provide information by customer class on the
number of Primary, Secondary, and Transmission level customers anticipated to
participate in the Program. What is the current total number of customers in each
group, and what is each group's estimated peak demand between the hours of 2 p.m.
and 8 p.m. during June, July, and August of 2008? What percentage of load reductions
(losses) was assumed for each group? Include an explanation of how the Company
has estimated the loss coefficients associated with each level of service in determining
the Program's cost-effectiveness.
RESPONSE TO REQUEST NO. 16: The table below is a breakdown of each
eligible customer class for the commercial demand response program, along with each
class's total peak demand for each summer month.
Total Measured Demand (MW)

Rate Schedule   Customer Count^   June    July    August
9S*             552               171.0   178.5   179.7
9P              134               78.5    79.4    81.3
19S             1                 1.18    1.24    1.22
19P             103               300.8   318.1   319.6
19T             3                 9.8     9.8     9.7
Total           793               561.3   587.1   591.5

* greater than 200 kW average summer demand
^ As of 12/31/2008 - Idaho Customers Only
EnerNOC will be provided with a list of all Schedule 9 and 19 customers with
average summer kW greater than 200. It is up to EnerNOC to determine the number
of customers to target within each customer class that will provide Idaho Power with the
contracted amount of demand reduction. Idaho Power will not be aware of the
customers participating in the program until after customers have contracted with
EnerNOC.
Idaho Power followed its standard process for estimating loss coefficients
associated with each voltage level, consistent with the methods used in the last general
rate case (Case No. IPC-E-08-10). Below are the peak demand coefficients used in the
cost effectiveness analysis.
Average System Loss Coefficients

System Level              Typical Peak Demand Coefficients
Transmission              1.055
Distribution Station      1.065
Distribution Primary      1.100
Distribution Secondary*   1.130
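Such coefficients are typically applied by scaling a load reduction metered at the customer's service level up to its value at generation. A minimal sketch using the tabulated values (the function and dictionary names are illustrative, not the Company's model):

```python
# Peak demand loss coefficients from the table above.
LOSS_COEFFICIENTS = {
    "Transmission": 1.055,
    "Distribution Station": 1.065,
    "Distribution Primary": 1.100,
    "Distribution Secondary": 1.130,
}

def reduction_at_generation(metered_kw, service_level):
    """Scale a metered demand reduction (kW) up by the peak-demand
    loss coefficient for the customer's service level."""
    return metered_kw * LOSS_COEFFICIENTS[service_level]

# A 200 kW reduction at secondary voltage is worth about 226 kW at generation.
value_kw = reduction_at_generation(200.0, "Distribution Secondary")
```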
The response to this Request was prepared under the direction of Pete Pengilly,
Leader Customer Research and Analysis, Idaho Power Company, in consultation with
Barton L. Kline, Lead Counsel, Idaho Power Company.
DATED at Boise, Idaho this 25th day of March 2009.
BARTON L. KLINE
Attorney for Idaho Power Company
CERTIFICATE OF SERVICE
I HEREBY CERTIFY that on this 25th day of March 2009, I served a true and
correct copy of IDAHO POWER COMPANY'S RESPONSE TO THE COMMISSION
STAFF'S FIRST PRODUCTION REQUEST TO IDAHO POWER COMPANY upon the
following named parties by the method indicated below, and addressed to the following:
Commission Staff
Neil Price
Deputy Attorney General
Idaho Public Utilities Commission
472 West Washington
P.O. Box 83720
Boise, Idaho 83720-0074
X  Hand Delivered
__ U.S. Mail
__ Overnight Mail
__ FAX
X  Email Neil.price@puc.idaho.gov
Barton L. Kline
BEFORE THE
IDAHO PUBLIC UTILITIES COMMISSION
CASE NO. IPC-E-09-02
IDAHO POWER COMPANY
RESPONSE TO STAFF'S
PRODUCTION REQUEST NO. 14
LBNL-58939
ERNEST ORLANDO LAWRENCE
BERKELEY NATIONAL LABORATORY
Estimating Demand Response Load Impacts:
Evaluation of Baseline Load Models for Non-
Residential Buildings in California
Katie Coughlin, Mary Ann Piette, Charles Goldman, and Sila Kiliccote
Demand Response Research Center
Ernest Orlando Lawrence Berkeley National Laboratory
1 Cyclotron Road, MS 90R4000
Berkeley CA 94720-8136
Environmental Energy Technologies Division
January 2008
http://eetd.lbl.gov/ea/EMS/EMS-pubs.html
The work described in this report was funded by the California Energy Commission (Energy
Commission), Public Interest Energy Research (PIER) Program, under Work for Others
Contract No. 150-99-003, Am #1, and the Office of Electricity Delivery and Energy
Reliability, Permitting, Siting and Analysis of the U.S. Department of Energy, under
Contract No. DE-AC02-05CH11231. The authors are solely responsible for any
omissions or errors contained herein.
Table of Contents
List of Tables ................................................................................................................................. i
List of Figures ............................................................................................................................... ii
1. Introduction ........................................................................................................................... 1
1.1. Project Objectives and Analytical Approach ................................................................ 3
1.2. Prior Work ........................................................................................................................ 4
2. Data Processing and Evaluation Metrics ........................................................................... 6
2.1. Data Sources ..................................................................................................................... 6
2.2. Proxy Event Days ............................................................................................................ 6
2.3. Model Runs and Diagnostics ......................................................................................... 7
2.3.1. Adjustment Factors ................................................................................................. 8
2.3.2. Diagnostic Measures ............................................................................................... 9
3. Weather Sensitivity .............................................................................................................. 9
4. Baseline Profile (BLP) Models ........................................................................................... 11
4.1. 10-Day Simple Average Baseline with Morning Adjustment (BLP1) .................... 12
4.2. Weighted Average Baseline with Morning Adjustment (BLP2) ............................. 12
4.3. Simple Average over the Highest 3 out of 10 Admissible Days with Morning
Adjustment (BLP3) ........................................................................................................ 12
4.4. Simple Average over the Highest 5 out of 10 Admissible Days with Morning
Adjustment (BLP4) ........................................................................................................ 13
4.5. Seasonal Regression Baseline with Morning Adjustment (BLP5) .......................... 13
4.6. 10-Day Regression Baseline with Morning Adjustment (BLP6) ............................. 13
4.7. Limited Seasonal Regression with Morning Adjustment (BLP7) ........................... 13
5. Results ................................................................................................................................... 13
5.1. Building Characteristics ................................................................................................ 14
5.2. Morning Adjustment .................................................................................................... 16
5.3. Bias and Accuracy ......................................................................................................... 19
5.4. Event Day Shed Load Estimates .................................................................................. 22
6. Conclusions and Suggestions for Further Work ............................................................ 25
List of Tables
Table 1: Sites included in this study .......................................................................................... 3
Table 2: Hourly rank order correlation (ROC) coefficients ................................................... 11
Table 3: Summary of BLP models evaluated .......................................................................... 12
Table 4: Classification by load variability (var) and weather sensitivity (ws) .................... 16
Table 5: Metrics for the percent hourly error e(d,h) by site and model ............................... 20
Table 6: Metrics for the average hourly load percent error E(d) .......................................... 21
List of Figures
Figure 5-1: Example results for models BLP3n and BLP3 ..................................................... 14
Figure 5-2: Maximum, minimum and average hourly load at each site .............................. 15
Figure 5-3: Error magnitude for model BLP3 without and with adjustment ...................... 17
Figure 5-4: Comparison: probability of error less than 5% with .......................................... 18
Figure 5-5: Predictions of the shed load for event days in California 2005 and 2006 ........ 23
Figure 6: Aggregate estimated load reduction by baseline model ....................................... 24
Acknowledgements
The work described in this report was funded by the Demand Response Research Center,
which is funded by the California Energy Commission (Energy Commission), Public
Interest Energy Research (PIER) Program, under Work for Others Contract No. 150-99-
003, Am #1, and the Office of Electricity Delivery and Energy Reliability, Permitting,
Siting and Analysis of the U.S. Department of Energy under Contract No. DE-AC02-
05CH11231.
Researchers who contributed to this paper include Ranjit Bharvirkar and June Han at
Lawrence Berkeley National Laboratory (LBNL). The authors are grateful for the
support from Mike Gravely, Martha Brook, and Kristy Chew (California Energy
Commission). The authors would like to thank the following for review comments on
drafts of this report: Glenn Perez and John Goodin (CAISO), Carmen Hendrickson
(EnerNOC), and Mark Martinez (SCE).
Abstract
Both Federal and California state policymakers are increasingly interested in developing
more standardized and consistent approaches to estimate and verify the load impacts of
demand response programs and dynamic pricing tariffs. This study describes a
statistical analysis of the performance of different models used to calculate the baseline
electric load for commercial buildings participating in a demand-response (DR) program,
with emphasis on the importance of weather effects. During a DR event, a variety of
adjustments may be made to building operation, with the goal of reducing the building
peak electric load. In order to determine the actual peak load reduction, an estimate of
what the load would have been on the day of the event without any DR actions is
needed. This baseline load profile (BLP) is key to accurately assessing the load impacts
from event-based DR programs and may also affect payment settlements for certain
types of DR programs. We tested seven baseline models on a sample of 33 buildings
located in California. These models can be loosely categorized into two groups: (1)
averaging methods, which use some linear combination of hourly load values from
previous days to predict the load on the event day, and (2) explicit weather models, which
use a formula based on local hourly temperature to predict the load. The models were
tested both with and without morning adjustments, which use data from the day of the
event to adjust the estimated BLP up or down.
Key findings from this study are:
· The accuracy of the BLP model currently used by California utilities to estimate
load reductions in several DR programs (i.e., hourly usage in the highest 3 out of 10
previous days) could be improved substantially if a morning adjustment factor
were applied for weather-sensitive commercial and institutional buildings.
· Applying a morning adjustment factor significantly reduces the bias and
improves the accuracy of all BLP models examined in our sample of buildings.
· For buildings with low load variability, all BLP models perform reasonably well
in accuracy.
· For customer accounts with highly variable loads, we found that no BLP model
produced satisfactory results, although averaging methods perform best in
accuracy (but not bias). These types of customers are difficult to characterize
with standard BLP models that rely on historic loads and weather data.
Implications of these results for DR program administrators and policymakers are:
· Most DR programs apply similar DR BLP methods to commercial and industrial
sector customers. The results of our study, when combined with other recent
studies (Quantum 2004 and 2006; Buege et al., 2006), suggest that DR program
administrators should have flexibility and multiple options for suggesting the
most appropriate BLP method for specific types of customers.
· Customers that are highly weather sensitive should be given the option of using
BLP models that explicitly incorporate temperature in assessing their
performance during DR events.
· For customers with more variable loads, it may make more sense to direct these
facilities to enroll in DR programs with rules that require customers to reduce
load to a firm service level or guaranteed load drop (a common feature of
interruptible/curtailable tariffs) because DR performance is difficult to
predict and evaluate with BLP models.
· DR program administrators should consider using weather-sensitivity and
variability of loads as screening criteria for appropriate default BLP models to be
used by enrolling customers, which could improve the accuracy of DR load
reduction estimates.
1. Introduction
Both Federal and California state policymakers are increasingly interested in developing
more standardized and consistent approaches to estimate and verify the load impacts of
demand response programs and dynamic pricing tariffs (e.g., critical peak pricing)
(FERC Staff Report 2006; CPUC 2007).1 For example, the California Public Utilities
Commission is overseeing a regulatory process to develop methods to estimate the load
impacts of demand response (DR) programs. These methods will be useful for
measuring the cost-effectiveness of programs, will assist in resource planning and long-term
forecasting exercises, and will allow the California Independent System Operator (CAISO)
to utilize DR more effectively as a resource.
Policymakers are concerned that the methods used to estimate load reductions and
compensate customers and load aggregators are fair and accurate, and that protocols for
estimating load impacts can be used by resource planners and system operators to
incorporate demand-side resources effectively into wholesale (and retail) markets. One
of the challenges to developing protocols for estimating load impacts is the diversity of
customers (and their loads) and the heterogeneity in types of DR programs and dynamic
pricing tariffs. In its Order Instituting Rulemaking on DR load impact protocols, the
CPUC (2007) acknowledged that calculating the load impacts of DR programs is not
easy given the diversity in curtailment strategies, customer characteristics, and DR event
characteristics (e.g., timing, duration, frequency, and location).
This paper describes a statistical analysis of the performance of different models used to
calculate the baseline electric load for buildings participating in an event-driven
demand-response (DR) program, with emphasis on the importance of weather effects.
During a DR event, a variety of adjustments may be made to building operation, with
the goal of reducing the building peak electric load. In order to determine the actual
peak load reduction, an estimate of what the load would have been without any DR
actions is needed. This is referred to as the baseline load profile or BLP and is key to
accurately assessing the load impacts from certain types of demand response programs
that pay for load reductions.2 The impact estimate uses the BLP calculated for a specific
time period on the event day. This calculation should ideally account for all those
factors which are known to systematically impact the building load at any given
moment, such as weather, occupancy, and operation schedules.
1 In their report to Congress on Demand Response and Advanced Metering, FERC Staff identified
the need for consistent and accurate measurement and verification of demand response as a key
regulatory issue in order to provide system operators with accurate forecasts and assessments of
demand response, to support just and reasonable rates for the delivery of DR in wholesale
markets, and to accurately measure and verify demand resources that participate in capacity
markets.
2 Note that an explicit customer baseline calculation is not as important if the DR program design
requires customers to reduce usage to a "firm load" level (e.g., an interruptible/curtailable tariff)
(KEMA 2007).
The sample of buildings included in this study is mainly commercial (e.g., office and
retail) and institutional (e.g., schools, universities, government) buildings. There are a
few industrial facilities, including a bakery, electronics manufacturing, laboratories and a
large mixed-use office/data center. Historically, many utilities have marketed
emergency DR programs and interruptible/curtailable tariffs to large industrial facilities
with process loads or onsite generation. The mix and type of industries has changed in
California and other states due to the growth in light industry, high technology (e.g.,
computer electronics, bio-technology), commercial office space, the institutional sector,
and retail services. As DR programs continue to evolve, it is important that the program
rules and protocols for determining load impacts take into account the increasingly
diverse types of customers that can participate in DR programs.
The BLP methods discussed in this study are most relevant for non-residential buildings
and have not been broadly evaluated for relevance to industrial facilities. DR events are
called during times of system stress, which are also typically related to weather. For
California, DR may be used in the summer to deal with high peak loads on weekdays,
which are often driven by space cooling in buildings. This study looks at results for
buildings participating in an Automated Demand Response pilot sponsored by the PIER
Demand Response Research Center3 (Piette et al. 2007; Piette et al. 2005) and who face a
critical peak price. In these cases DR events are only called on normal working days,
during the period 12 p.m. to 6 p.m. Weather-sensitivity is likely to be especially important
during DR events.
Accurate BLP estimates help ensure that individual participants in DR programs are
fairly compensated as part of settlement procedures for their actual load reductions, and
that the contribution of demand response resources in aggregate is properly accounted
for in resource planning and benefit-cost screening analysis. In both cases it is important
to avoid systematic bias in estimating the load reductions. Given the correlation
between temperature and increased building energy use for space conditioning, non-
weather-corrected models may under-predict the baseline and therefore systematically
underestimate the response. This can be true even for buildings with large non-weather-
responsive loads, if the weather-dependent load is significant relative to the estimated
DR reduction. On the other hand, many customers, load aggregators and DR program
administrators have a strong preference for simpler calculation methods with limited
data requirements that can be used for customer settlement processes. It is therefore
useful to establish how much quantitative improvement is gained by introducing
more complicated calculation methods.
3 The California Energy Commission's Public Interest Energy Research (PIER) Program sponsors
the DRRC, which is managed by LBNL.
Table 1: Sites included in this study
1.1. Project Objectives and Analytical Approach
In this study we evaluate seven BLP models for a sample of 32 sites in California,
incorporating 33 separately metered facilities. In some cases the meter may include
electricity use for multiple buildings at one location; such is the case, for example, with
the High School and the Office/Data Center. For each BLP model, we tested two
implementations: models without and with a morning adjustment (which incorporates
site usage data from the morning of the DR event prior to load curtailment). The site
locations, building types and associated weather data sites are listed in Table 1. The
majority of the sites in the dataset are commercial buildings, but the analytical methods
we develop here can be applied to any building type. For each site, 15-minute electric
interval load data are available through the web-based customer energy metering site
maintained by Pacific Gas and Electric (PG&E). While the models differ in the details,
each uses electric load data from a period before the event to predict the electric load on
an event day.
Our main objective in this work is to provide a statistically valid evaluation of how well
each BLP model performs, and to relate the performance to more general building
characteristics. To do so, we need to define both the sampling procedure and the
evaluation metrics. Building loads always have a random component, so the baseline
estimation problem is inherently statistical in the sense that, to properly assess the
performance of a method, a sufficiently large sample of applications must be considered.
Because our building sample is small, to develop a large enough data set we define a set
of proxy event days (days on which no curtailment occurs and the load is known, but
which are similar in terms of weather to actual event days). For these days, we use the
historical data and the BLP model to predict the load, and compare the prediction to the
actual load for that day. If the proxy event set is large enough, we can evaluate each
model for each site separately. We focus on metrics that quantify the bias and the
accuracy of the model at the building level.
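The evaluation procedure just described — predict the load on each proxy event day from earlier data, then compare against the known actual load — can be sketched as follows (a hypothetical harness with our own function names; `baseline_model` stands in for any of the seven BLP models):

```python
# Hypothetical evaluation harness (names are ours): for each proxy event day,
# predict the hourly load and record the percent error against the actual load.
def evaluate_model(baseline_model, loads, proxy_days, event_hours=range(12, 18)):
    """loads: {(day, hour): kW}. Returns the list of percent errors e(d,h)."""
    errors = []
    for d in proxy_days:
        predicted = baseline_model(loads, d)  # {hour: predicted kW}
        for h in event_hours:
            actual = loads[(d, h)]
            errors.append((actual - predicted[h]) / actual)
    return errors

# Toy check: a model that always predicts 90 kW against a flat 100 kW load
# under-predicts by 10% in every event hour.
loads = {(d, h): 100.0 for d in range(3) for h in range(24)}
flat_model = lambda lds, d: {h: 90.0 for h in range(24)}
errors = evaluate_model(flat_model, loads, proxy_days=[0, 1, 2])
print(len(errors), errors[0])  # 18 0.1
```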
1.2. Prior Work
Several recent studies have reviewed and analyzed alternative methods for calculating
DR peak load reductions, either as part of working groups or evaluations of California
DR programs using customer load data (KEMA 2003, Quantum 2004, Quantum 2006).
The most extensive review of BLP methods is provided in the KEMA (2003) study
Protocol Development for Demand Response Calculation - Findings and Recommendations. This
study examined a number of methods in use by utilities and ISOs across the country,
and evaluated them in terms of accuracy and bias. As noted there, a BLP method is
defined by specifying three component steps:
· A set of data selection criteria,
· An estimation method,
· An adjustment method.
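The three component steps can be made concrete by composing a BLP method from three pluggable functions, one per step (a structural sketch of our own, not the KEMA formulation itself):

```python
# Structural sketch (ours, not KEMA's formulation): a BLP method composed from
# its three component steps.
def make_blp_method(select_days, estimate, adjust):
    def blp(loads, event_day):
        days = select_days(loads, event_day)       # 1. data selection criteria
        baseline = estimate(loads, days)           # 2. estimation method
        return adjust(baseline, loads, event_day)  # 3. adjustment method
    return blp

# Example wiring: the 10 most recent days, a simple hourly average, no adjustment.
select_last_10 = lambda loads, d: list(range(max(0, d - 10), d))
hourly_average = lambda loads, days: {
    h: sum(loads[(x, h)] for x in days) / len(days) for h in range(24)
}
no_adjustment = lambda baseline, loads, d: baseline

simple_blp = make_blp_method(select_last_10, hourly_average, no_adjustment)
```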
The difference between the estimation and the adjustment step is that estimation uses
data prior to the event day to predict the BLP during the event period, while adjustment
uses data from the event day, before the beginning of the curtailment period, to align
and shift the predicted load shape by some constant factor to account for characteristics
that may affect load on the day of the event.
The KEMA 2003 report, while quite comprehensive, included only three accounts from
California in its total sample of 646 accounts. There are 32 accounts from the
Northwest and 24 from the Southwest, so the sample is dominated by data from the
eastern U.S. Given significant climatic and demographic variation across the country,
with corresponding differences in building practices, occupancy, etc., it is unclear how
well results really generalize across different regions. In particular, the KEMA study
found that explicitly weather-dependent models did not generally outperform models
that did not include weather. One of the goals of this work is to determine whether this
hypothesis also holds true for California.
Quantum Consulting (2004) conducted an analysis of methods to estimate customer
baselines as part of its broader evaluation of California's 2004 DR programs targeted at
industrial and commercial customers. The baseline assessment had billing data for a
large sample (450 customers) of non-participants that were eligible for the DR program;
customers' peak demand ranged from 200 kW to greater than 5 MW. The sample was
weighted appropriately to represent the population of eligible customers. Eight proxy
event days were selected for each utility from the period July 1, 2003 to August 31, 2003.
These event days were classified into three categories: high load (potential event days),
low load (as potential "test" days), and consecutive high load days (series of three high
load days that occurred back-to-back). This study, coupled with subsequent analysis of
load impacts in the Quantum (2006) evaluation, provides a more detailed analysis of the
bias and accuracy of BLP methods for large industrial and commercial buildings located
in California.
In developing the statistical sample of test profiles, KEMA (2003) and Quantum (2004)
used a large number of accounts but a relatively small number of calendar days,
comprised of only those days where an actual curtailment was called in the region (as in
KEMA) or proxy event days (as in Quantum). Our statistical approach is different,
using a much larger selection of proxy event days. This allows us to create a statistical
picture for each building, which is useful both because our building sample is smaller,
and because we can then evaluate whether different methods perform equally well for
different building types.
The methods investigated in this study overlap with the KEMA (2003) and Quantum
(2004) reports, with a somewhat different approach to adjustment for weather effects.
We have also developed a different method for estimating the degree of weather-
sensitivity of a building, and different diagnostics to quantify the predictive accuracy of
the BLP and the estimated peak load savings values that are used in bill settlement. The
metric used for measuring the bias of the BLP is similar to that used by Quantum (2004).
We also provide detailed results for the baseline model that is currently in wide use in
California, based on a simple average of the hourly load over the highest 3 of the
previous 10 days in the sample. Some of the baseline models tested in Quantum (2004)
are the same as those included in this study (e.g., 10-day unadjusted and 10-day
adjusted). Our approach to testing BLP models that include an Adjustment Factor is
similar to the Quantum (2004) study, although the number of hours and time period (e.g.,
day-of vs. day-ahead) used for calculating the adjustment factors is different.
The Quantum evaluation reports (2004 and 2006) and a subsequent article based on
those reports by Buege et al. (2006) conclude that the 10-day adjusted BLP is
significantly better than the currently used 3-day unadjusted BLP in California.
Specifically, the authors assert that the 3-day unadjusted BLP method is biased high by
two to four times. They also find that the presence of large customers with highly
variable load can add considerable uncertainty to the estimation of baselines.
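The widely used California baseline — a simple average of the hourly load over the highest 3 of the previous 10 admissible days — might be sketched as follows. This is our own minimal version; ranking the "highest" days by total load over the event window is one common convention, not necessarily the exact tariff rule:

```python
# Our minimal sketch of the "highest 3 of the previous 10 days" baseline. The
# ranking convention (total load over the event window) is one common choice,
# not necessarily the exact tariff rule.
def three_in_ten_baseline(loads, admissible_days, event_hours=range(12, 18)):
    """loads: {(day, hour): kW}; admissible_days: ordered, most recent last."""
    last_ten = admissible_days[-10:]
    top3 = sorted(last_ten,
                  key=lambda d: sum(loads[(d, h)] for h in event_hours),
                  reverse=True)[:3]
    return {h: sum(loads[(d, h)] for d in top3) / 3 for h in range(24)}
```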
The remainder of the paper is organized as follows: In Section 2 we present an overview
of the technical steps involved in preparing the data sets, defining the sample of proxy
event days, running the models and developing the diagnostics. In Section 3 we
describe our weather sensitivity metrics, and in Section 4 we define each of the methods
investigated in this paper. Section 5 presents the results for our building sample. In
Section 6 we provide a discussion of the limitations of the analytical approach used here,
and outline some suggestions for future work.
2. Data Processing and Evaluation Metrics
In this section we describe the preparation of the data, the mechanics of implementing the
different models, and the diagnostic metrics used in this report.
2.1. Data Sources
The building load data used in this project consist of 15-minute electric interval load
data for each metered building, which we convert to hourly by averaging the values in
each hour. We use data from May through October of 2005 and 2006 to define the
sample days and test the methods. Only the warm-weather months are included here,
as these are the periods when (to date) events are more likely to be called in California's
DR programs. The amount of data available depends on how long the account has
participated in the DR program (in some cases interval meters were installed because the
site was willing to go onto a DR program), and whether there is any missing data during
the sample period.
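The conversion from 15-minute interval data to hourly loads by averaging the four values in each hour is straightforward; a plain-Python sketch:

```python
# Averaging 15-minute interval readings into hourly loads (a minimal sketch).
def to_hourly(interval_kw):
    """interval_kw: 15-minute kW readings, length a multiple of 4."""
    return [sum(interval_kw[i:i + 4]) / 4 for i in range(0, len(interval_kw), 4)]

print(to_hourly([100, 110, 120, 130, 200, 200, 210, 230]))  # [115.0, 210.0]
```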
The explicit weather models require hourly temperature data for each site. The data
were obtained by assigning each site to a weather station that is currently active and
maintained by either a state or a federal agency. A website developed at the University
of California at Davis (www.ipm.ucdavis.edu/WEATHER) provides maps of the
weather monitoring stations maintained by various entities for each county in California.
These are used to select the weather station closest (both geographically and in
elevation) to each site. The sites were chosen from those maintained by NOAA
(available by subscription) or by the California Irrigation Management Information
System (CIMIS), which is a program of the state Department of Water Resources. Only
outdoor dry bulb air temperature data are used currently in developing the weather-
dependent models.
2.2. Proxy Event Days
The goal of using proxy event days is to have a large sample set for which (i) the actual
loads are known and (ii) the days are similar in some sense to the actual DR event days
that were called by the CAISO and California utilities in 2005 and 2006. Before selecting
the proxy set, we first need to define the set of what we call admissible days, which is the
set of days that can be used as input to the BLP model calculations. Following standard
procedures, we define admissible days as normal working days, i.e., eliminating
weekends, holidays and past curtailment events.
The proxy event days are selected as a subset of the admissible days. DR events are
typically called on the hottest days, and can be called independently in each of several
climate zones defined by the CEC (all the sites available for this study are located in
either zone 1 or zone 2, as indicated in Table 1). To define the weather characteristics
associated with an event day, we first construct a spatially-averaged zonal hourly
temperature time series, using a simple average over the weather stations located in the
zone. The hourly zonal temperatures are then used to construct three daily metrics: the
maximum daily temperature, the average daily temperature, and the daily cooling
degree hours (using 65 °F as the base temperature).
Sorting the weather data on the value of the daily metric provides a list of the hottest to
coolest days in the sample period. We defined the proxy event days as the top 25
percent of the admissible days sorted in this manner. The three metrics give consistent
results for the hottest days, but select slightly different samples. A little over ¾ of the
actual event days in each year are included in the top 25 percent selected.4 The results
presented here use the sample associated with cooling degree hours.5 For each
building, a proxy event day is included in the analysis only if there is sufficient load data
for that day. Hence, the proxy event sets vary somewhat from building to building. On
average, this procedure leads to about 60 proxy days for each site.
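The two selection steps — admissible days (normal working days) and proxy event days (the hottest 25 percent by a daily weather metric such as cooling degree hours) — might look like this in code (a hypothetical sketch with our own function names):

```python
# Hypothetical selection of admissible days and proxy event days.
from datetime import date, timedelta

def admissible_days(start, end, holidays=(), past_events=()):
    """Normal working days: weekdays, excluding holidays and past DR events."""
    days, d = [], start
    while d <= end:
        if d.weekday() < 5 and d not in holidays and d not in past_events:
            days.append(d)
        d += timedelta(days=1)
    return days

def daily_cooling_degree_hours(hourly_temps_f, base=65.0):
    """Sum of hourly exceedances of the 65 °F base temperature."""
    return sum(max(t - base, 0.0) for t in hourly_temps_f)

def proxy_event_days(days, cdh_by_day):
    """Top 25% of admissible days, ranked hottest-first by cooling degree hours."""
    ranked = sorted(days, key=lambda d: cdh_by_day[d], reverse=True)
    return ranked[:len(ranked) // 4]
```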
2.3. Model Runs and Diagnostics
In our procedure, model results are calculated for all the admissible days, but
diagnostics are calculated only for the set of proxy event days. For each model and each
building site, the BLP for each hour from 9 am-6 pm is calculated. While the event
period is limited to 12 pm-6 pm, the adjustment factors may require model and actual
data from the early morning period. Our notation is as follows:
· the admissible day is labeled d
· the hour is labeled h; our convention is h = time at the beginning of the hour
· the predicted load is p(d,h)
· the actual load is a(d,h)
· the adjustment factor for day d is c(d)
4 It is possible that a metric based on the deviation of the daily value from a monthly average
would capture the rest of the event days; however, it is also the case that event days may not be
entirely determined by the daily temperature.
5 The results do not appear to be sensitive to the daily metric used to define the sample. Note that
the proxy event days are defined purely from temperature data, so there is one set for each zone.
· the absolute difference between the actual load and the predicted load is defined
as x(d,h) = a(d,h) - p(d,h)
· the relative difference between the actual load and the predicted load (or percent
error) is defined as e(d,h) = x(d,h)/a(d,h)
For each combination of a model and a site we calculate the absolute and relative
difference between predicted and actual loads, x(d,h) and e(d,h), for each proxy event
day and each hour in the event period, which gives us about 360 observations for each
building site. Our statistical metrics are defined for these sets of numbers.
Often, utilities or ISOs settle payments for performance during DR events based on the
average hourly load reduction during the hours of the event. It is therefore useful to
compare the prediction of the average hourly load to the actual value. To do so we
define:
· A(d) = ⟨a(d,h)⟩, the actual hourly load averaged over the event period
· P(d) = ⟨p(d,h)⟩, the predicted hourly load averaged over the event period
· X(d) = A(d) - P(d), the absolute difference in average event-period hourly load
· E(d) = X(d)/A(d), the percent difference in average event-period hourly load
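The hourly and event-period error quantities defined above translate directly into code (the function names are ours):

```python
# Direct transcription of the error quantities above (function names are ours).
def hourly_errors(actual, predicted, hours):
    """actual, predicted: {hour: kW}. Returns the lists x(d,h) and e(d,h)."""
    x = [actual[h] - predicted[h] for h in hours]
    e = [(actual[h] - predicted[h]) / actual[h] for h in hours]
    return x, e

def event_period_errors(actual, predicted, hours):
    """Returns A(d), P(d), X(d) and E(d) for one day."""
    A = sum(actual[h] for h in hours) / len(hours)
    P = sum(predicted[h] for h in hours) / len(hours)
    X = A - P
    return A, P, X, X / A
```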
2.3.1. Adjustment Factors
As noted in the KEMA 2003 report, the algorithm for predicting a customer's load shape
includes a modeling estimation step and an adjustment step. In our analysis, we
evaluate each model both with and without a morning adjustment factor applied. The
KEMA report reviews several methods for calculating the adjustment factor. Most are
based on some comparison of the actual to the predicted load in the hours immediately
preceding an event. In this study, we use a multiplicative factor defined as the ratio of
the actual to the predicted load in the two hours prior to the event period:6
c(d) = [a(d,h=10) + a(d,h=11)] / [p(d,h=10) + p(d,h=11)].
To adjust the BLP, we multiply the predicted value in each hour by the daily adjustment
factor:
p'(d,h) = c(d) × p(d,h).
The Adjustment Factor essentially scales the customer's baseline from admissible days
to the customer's operating level on the actual day of a DR event.
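The multiplicative morning adjustment — the ratio of actual to predicted load in hours 10 and 11, applied to every hour of the baseline — is, in code (a minimal sketch):

```python
# The multiplicative morning adjustment in code (a minimal sketch): scale every
# hour of the baseline by the ratio of actual to predicted load in hours 10-11.
def morning_adjust(actual, predicted):
    """actual, predicted: {hour: kW}. Returns the adjusted baseline p'(d,h)."""
    c = (actual[10] + actual[11]) / (predicted[10] + predicted[11])
    return {h: c * p for h, p in predicted.items()}

# If the actual morning load runs 50% above the baseline, every hour scales by 1.5.
predicted = {h: 100.0 for h in range(9, 18)}
actual = {10: 150.0, 11: 150.0}
print(morning_adjust(actual, predicted)[15])  # 150.0
```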
6 The Quantum (2004 and 2006) studies use the three hours preceding the event period.
7 Deciding on the period to use for the Adjustment Factor can be more problematic for DR
programs or tariffs where the event is announced on prior days (e.g., Critical Peak Pricing), as
there may be some concern about customers "gaming" their baseline by intentionally increasing
consumption during the hours just prior to the event. Quantum (2004) addressed this issue by
We also tested an alternative adjustment approach that used the two hours preceding
the event to defie an additive, rather than multiplicative, correction factor. In our
sample, there is no signficant difference in the results.
2.3.2.Diagnostic Measures
For each BLP model, both with and without adjustment, and each site, we calculate the
set of absolute and percentage errors x(d,h) and e(d,h). Our evaluation of the
performance of a model is based on the statistical properties of these errors. To measure
any bias in the modeL, we calculate the median of the distribution of errors.S If the
method is unbiased the median wil be zero. If the median is positive (negative) it
means that the model has a tendency to predict values smaller (larger) than the acmal
values. To quantify the accuracy of the model, we calculate the average of the absolute
value of the error terms (I e( d,h) I or I x( d,h) I). These metrics can also be applied to the
average event-period values Xed) or E(d).
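These two diagnostics are straightforward to compute; the following is a minimal Python sketch over a hypothetical set of hourly percent errors for one site and one BLP model:

```python
# Bias (median) and accuracy (mean absolute error) of percent errors.
# The error values are hypothetical; in the study they would be the set of
# e(d,h) over all proxy-event hours for one site/model pair.
from statistics import median

errors = [2.0, -1.5, 3.0, 0.5, -0.5, 4.0, -2.0]   # percent errors e(d,h)

bias = median(errors)                                 # 0 for an unbiased model
accuracy = sum(abs(e) for e in errors) / len(errors)  # mean of |e(d,h)|
```

The median is used for bias, rather than the mean, for the robustness reason given in the footnote: a single outlier hour cannot drag it far.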
3. Weather Sensitivity

Weather sensitivity is a measure of the degree to which building loads are driven
directly by local weather. By far the most important weather variable is temperature.
Physically, space-conditioning loads are affected by the total heat transfer to the
building from the environment, which is affected by such details as the orientation and
shading of the building, shell characteristics, thermal mass, cooling and ventilation
strategies, and occupant behavior. In modeling baseline energy consumption, the
cooling load in a given hour is related to some kind of weighted integral of the
temperature over an earlier set of hours, with the weighting and the number of hours
depending on the specific building. Practically, weather dependence is often
represented by using regression models relating hourly load to hourly temperature,
possibly including lagged variables or more complex functions of temperature. The
KEMA 2003 report investigated a number of weather regression models, some fairly
complicated, but it is not clear from that study that including additional variables leads
to a consistent improvement in the accuracy of the models tested. In some climates
humidity may be an important factor in weather sensitivity, but for sites in California,
weather behavior is likely to be dominated by dry bulb outdoor air temperature (OAT).
The models tested here are based on straightforward correlation of hourly load with
hourly OAT. This approach effectively rolls all other building-specific factors into the
regression coefficients.

7 (cont.) selecting the three hours prior to the utility notifying customers of an event on the prior
weekday. For purposes of our analysis, we have used the two hours immediately preceding a
CPP event on the same day for the Adjustment Factor.

8 The median of a set of numbers is the value such that one half of the set is greater than the
median, and one half of the set is less than the median. The average value of the error could also
be used as a bias measure; however, the median tends to be more robust, as it is not sensitive to
outliers.
To develop an a priori sense of whether a building is likely to be weather sensitive, we
use a simple and robust correlation function known as Spearman Rank Order
Correlation (ROC) (Press et al. 2007). Given two time series (X(t), Y(t)) of equal length M,
the ROC is obtained by (1) replacing each variable with its rank relative to the rest of the
set and (2) calculating the linear correlation coefficient between the two sets of ranks.
While the distributions of the X and Y variables may be unknown, the ranks are
distributed uniformly on the interval (1, M). Thus, the ROC can be calculated explicitly
without approximation, along with the associated statistical significance. The ROC
coefficient is insensitive to the size of hourly variation in X and Y, and measures only the
degree to which they tend to rise and fall together. This makes it more straightforward
to compare correlation magnitudes across different types of buildings. The ROC should
also provide a more robust measure of weather sensitivity for buildings with highly
variable loads.
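The two-step definition above (rank-transform, then linear correlation) can be sketched directly in Python; the load and temperature series are hypothetical, and ties are ignored for simplicity:

```python
# Spearman rank-order correlation: replace each series by its ranks, then
# compute the ordinary (Pearson) linear correlation of the ranks.

def ranks(xs):
    # Assign ranks 1..M by sorted order (ties not handled; fine for a sketch).
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def spearman_roc(x, y):
    return pearson(ranks(x), ranks(y))

# Hypothetical: load and OAT for one hour of day across five admissible days.
load = [210.0, 250.0, 240.0, 300.0, 280.0]
temp = [68.0, 75.0, 72.0, 95.0, 88.0]
```

Because only the ranks enter the calculation, the result is unaffected by the absolute size of the hourly variation, which is what makes the coefficient comparable across buildings of very different scale.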
For each site, we calculate the ROC between load and temperature for each hour
separately for all the admissible days. We calculate an ROC coefficient in each hour
separately to avoid spurious correlations driven by the daily work schedule. The
average of these calculated values during event-period hours is shown in Table 2. These
have been color-coded to indicate high (>= 0.8), medium (0.65-0.8), low (0.5-0.65) and
very low (< 0.5) degrees of correlation. We also calculate an average coefficient over all
the hours, which is used as an overall indicator for the building. In all cases except two
the significance is greater than 95%. The two exceptions are the School1 and School2
sites, which also show negative correlation coefficients. These schools are closed from
mid-June to September. The algorithm works correctly for these sites, but what it picks
out is an anti-correlation between load and temperature and a strong random
component.
Table 2: Hourly rank order correlation (ROC) coefficients
[Rows: Retail6, Supermarket, Retail4, Office/LM5, Retail3, Retail5, Office2, Office3,
Office4, Office/DC3, Office/Lab2, Retail1, Office/LM7, Office1, Office/DC1, Office/DC2,
Detention Facility, Retail2, Office/LM1, Office/LM2, Office/LM4, Office/Lab1,
Office/LM8, Office/Lab3, Museum, Office/LM3, Office5, Office/LM9, Office/LM6,
Bakery, School1, School2. The color-coded hourly coefficient values are not reproduced
legibly in this copy.]
4. Baseline Profile (BLP) Models
We tested seven baseline models for our sample of buildings, with and without the
morning adjustment factor applied. These models can be loosely categorized into two
groups: (1) averaging methods, which use some linear combination of hourly load values
from previous days to predict the load on the event day (models 1 through 4), and (2)
explicit weather models, which use a formula based on local hourly temperature to predict
the load (models 5 through 7). The methods are summarized in Table 3, and described
in more detail below. To improve the readability of the results tables, we have given
each model a code (BLP1 through BLP7). For the version of a model with no morning
adjustment factor applied, we append an n to the code. For example, BLP1 refers to the
simple average model with morning adjustment, and BLP1n refers to the simple average
with no adjustment.
Table 3: Summary of BLP models evaluated

Code  | Description
BLP1  | 10-day simple average baseline with morning adjustment
BLP2  | Weighted average formula using previous 20 admissible days with morning adjustment
BLP3  | Simple average over the highest 3 out of 10 previous admissible days with morning adjustment
BLP3n | Simple average over the highest 3 out of 10 previous admissible days without morning adjustment
BLP4  | Simple average over the highest 5 out of 10 previous admissible days with morning adjustment
BLP5  | Seasonal regression baseline with morning adjustment
BLP6  | 10-day regression baseline with morning adjustment
BLP7  | Limited seasonal regression baseline with morning adjustment
4.1. 10-Day Simple Average Baseline with Morning Adjustment (BLP1)
In simple averaging, the average of the hourly load over the N most recent
admissible days before the event is used to predict the load on the event day.
Typically, N is set equal to 10, which is the value used in our analysis. Note that
averaging will tend to under-predict the load by definition. Both BLP1 and
BLP1n (without morning adjustment) were also tested in the Quantum (2004)
study.
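A minimal Python sketch of this baseline follows; the structure of `history` (a most-recent-first list of hourly load profiles for the admissible days) is an assumption for illustration:

```python
# BLP1 sketch: predict each hour as the average of that hour's load over the
# N most recent admissible days. `history` is hypothetical, most-recent-first,
# with one list of hourly loads (kWh) per admissible day.

def simple_average_baseline(history, n=10):
    days = history[:n]                    # the N most recent admissible days
    hours = len(days[0])
    return [sum(day[h] for day in days) / len(days) for h in range(hours)]

# Example with 3 admissible days and 4 hours each:
hist = [[100.0, 110.0, 120.0, 115.0],
        [ 90.0, 105.0, 125.0, 110.0],
        [ 95.0, 100.0, 130.0, 105.0]]
blp = simple_average_baseline(hist, n=3)   # [95.0, 105.0, 125.0, 110.0]
```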
4.2. Weighted Average Baseline with Morning Adjustment (BLP2)
In recent regulatory discussions on load impact estimation protocols, EnerNOC
has proposed a recursive formula to predict the load on day d from predictions
over a set of N previous days (EnerNOC 2006). This is equivalent to a weighted
average of actual loads over the previous N days, with weights defined by:

pl(d,h) = 0.1 * sum(m=0 to N-1) [ (0.9)^m * al(d-m,h) ] + (0.9)^N * al(d-N,h)

We applied EnerNOC's proposed BLP using 20 previous days.
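Interpreting the formula literally, it can be sketched as follows; the indexing convention (index m counting back from the event day) is an assumption made for illustration:

```python
# BLP2 sketch of the weighted-average formula. `actuals` is a hypothetical list
# holding al(d-m, h) for one fixed hour h; we assume actuals[m] is the m-th
# previous admissible-day value, so actuals[0] is the most recent.

def weighted_average_baseline(actuals, n):
    # pl = 0.1 * sum_{m=0}^{n-1} (0.9)^m * al(d-m) + (0.9)^n * al(d-n)
    pred = 0.1 * sum((0.9 ** m) * actuals[m] for m in range(n))
    pred += (0.9 ** n) * actuals[n]
    return pred

# The weights 0.1*(0.9)^m plus the tail weight (0.9)^n sum to exactly 1,
# so a constant load history is reproduced unchanged:
flat_history = [200.0] * 21
pred = weighted_average_baseline(flat_history, n=20)   # pred ~= 200.0
```

The geometric weights give recent days the most influence while still being a true weighted average (the weights sum to one), which is what makes the recursive form equivalent to an averaging method.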
4.3. Simple Average over the Highest 3 out of 10 Admissible Days with
Morning Adjustment (BLP3)
In this model, the 3 days with the highest average load during the event period
12pm-6pm are selected from the previous 10 days, and the simple average of the
load over these three days is calculated for each hour. The unadjusted version
(BLP3n) is the baseline method currently used in California's Demand Bidding
and Critical Peak Pricing programs9 and was also tested in Quantum (2004).
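The day-selection step can be sketched as follows; the 24-hour profile format and the noon-6 pm window indices are illustrative assumptions:

```python
# BLP3n sketch: from the previous 10 admissible days, select the 3 with the
# highest average load over the 12pm-6pm event window, then average those
# selected days hour by hour. `days` is a hypothetical list of 24-hour profiles.

def high_3_of_10_baseline(days, window=range(12, 18), top=3):
    # Rank the days by their event-window average load, highest first.
    scored = sorted(days,
                    key=lambda d: sum(d[h] for h in window) / len(window),
                    reverse=True)
    chosen = scored[:top]
    # Hour-by-hour simple average over the chosen days.
    return [sum(d[h] for d in chosen) / top for h in range(24)]

# Example: day i has constant load i, so the top 3 days are 9, 8 and 7.
days = [[float(i)] * 24 for i in range(10)]
blp = high_3_of_10_baseline(days)   # 8.0 in every hour
```

Selecting the highest days, rather than all ten, partially offsets the under-prediction tendency of plain averaging noted in Section 4.1.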
4.4. Simple Average over the Highest 5 out of 10 Admissible Days with
Morning Adjustment (BLP4)
This method is similar to BLP3, except the highest five days are used.
4.5. Seasonal Regression Baseline with Morning Adjustment (BLP5)
In this method, we use a year's worth of data to calculate the coefficients of a
linear model: pl(d,h) = C1(h) + C2(h)*temperature(d,h). The coefficients are
calculated using linear regression. We have calculated two separate sets of
coefficients, using 2005 data and 2006 data. The coefficients differ slightly, and
the 2006 values are used here. All the admissible days from May through
October are used. This is the Linear Regression with Seasonal Coefficient
method.
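The per-hour least-squares fit can be sketched as follows; the temperature and load values are hypothetical, chosen to lie exactly on a line so the recovered coefficients are obvious:

```python
# BLP5 sketch: per-hour linear model pl(d,h) = C1(h) + C2(h) * T(d,h), with
# C1 and C2 fit by ordinary least squares over a season of admissible days.
# Data below are hypothetical and exactly linear: load = 100 + 2 * T.

def ols_fit(temps, loads):
    n = len(temps)
    mt, ml = sum(temps) / n, sum(loads) / n
    c2 = (sum((t - mt) * (l - ml) for t, l in zip(temps, loads))
          / sum((t - mt) ** 2 for t in temps))
    c1 = ml - c2 * mt
    return c1, c2

temps = [70.0, 80.0, 90.0, 100.0]      # OAT for one hour h across the season
loads = [240.0, 260.0, 280.0, 300.0]   # load for the same hour/days
c1, c2 = ols_fit(temps, loads)          # c1 = 100.0, c2 = 2.0

def predict(temp):
    return c1 + c2 * temp
```

In the study a separate (C1, C2) pair is fit for each hour of the day, so the model captures both the daily schedule and the temperature response.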
4.6. 10-Day Regression Baseline with Morning Adjustment (BLP6)
This method uses a linear regression model as defined for BLP5, but the
coefficients are calculated using only data from the N most recent admissible
days prior to the event period. In this analysis we set N equal to 10.
4.7. Limited Seasonal Regression with Morning Adjustment (BLP7)
This method is a variation of BLP5. Here, in calculating the regression
coefficients, instead of using all the admissible days from May through October,
we use only those hours for which the temperature is greater than or equal to
60°F. The results for this model do not differ significantly from those for model
BLP5, and are not included in the tables.
5. Results
As an illustration of how the actual and estimated load profiles look, Figure 1 shows
data for an office building in Fremont, for a summer day in 2006. The plot shows the

9 The California Capacity Bidding Program uses a different version of the BLP3n model, in which
the selection of the highest 3 of 10 days is based on analysis of the total load for all the sites
included in the portfolio of a load aggregator. For the methods evaluated in this study, the 3
highest days are chosen separately for each individual site/facility. This approach is commonly
used by other U.S. ISO/RTOs (e.g., NYISO, PJM, ISO-NE) in their DR programs. We believe that
calculating customer baseline load profiles from individual participating facilities is likely to
enhance customer acceptance and transparency because individual customers can determine and
verify their baseline load profile and load reductions (compared to the aggregator portfolio
approach in which the CBL depends on the usage patterns of all other customers).
estimated BLP for method BLP3 with adjustment and method BLP3n with no
adjustment, and the actual load. The model values are calculated for all the hours from 9
am to 6 pm, and the values for the hours beginning at 10 am and 11 am are used to calculate
the morning adjustment factor. In this particular case, the unadjusted prediction is
below the actual load, so the adjustment boosts the load profile upward.
The percent error in the estimate is the difference between the actual and
estimated load, divided by the actual load. For this example, the actual hourly load is on
the order of 300 kWh. The BLP3n prediction is roughly 30 kWh below the actual, so the
percent error in model BLP3n is about +10%. This error is slightly larger during the
afternoon, high-load period. The difference from the adjusted BLP3 profile is roughly 5-
15 kWh during the event period, so the adjustment reduces the error to roughly 5%.

Our statistical analysis is based on calculations of profiles like the one illustrated in
Figure 1 for all sites, all proxy event days and all models. For a given site and model,
the performance of the model is characterized by the average size of the absolute percent
error over all proxy event days, and whether there is a bias towards predominantly
positive or negative errors.
Figure 5-1: Example results for models BLP3n and BLP3
[Line plot of the actual load and the BLP3n and BLP3 baselines versus hour beginning
(9 through 18); vertical axis: hourly load, roughly 200-360 kWh.]
5.1. Building Characteristics
An examination of some general characteristics of our sample of buildings is very
helpful in interpreting the results of our analysis of BLP models. The characteristics we
use are the weather sensitivity (discussed above) and the load variability of each
building in the sample. In this context, load variability refers to how different the load
profiles are from one day to another, which will affect the degree to which the loads on a
given day can be predicted from previous data.

There are a variety of ways of measuring the load variability. In Figure 2, we show one
approach, where for each building site the minimum, maximum and average hourly
load are plotted. The sites are labeled in Figure 2 by building type, and the order on the
horizontal axis is determined by sorting the average loads from largest to smallest. Note
that the vertical axis uses a logarithmic scale. This plot shows that while for most sites
variability is moderate, for several sites the variability exceeds two orders of magnitude.
In these cases, the building was essentially "turned off" for some part of the sample
period (for example, the Museum is closed on Mondays).
Figure 5-2: Maximum, minimum and average hourly load at each site
[Log-scale plot of the minimum (Lmin), average (Lavg) and maximum (Lmax) hourly
load, spanning roughly 10 to 10,000 kWh, with sites labeled by building type and sorted
by average load.]
To quantify the variability, we use a simple measure based on the deviation of the load
in each hour from an average calculated over all the admissible days. The deviation is
defined as the average value of the difference between the load in a given hour and the
period average load for that hour. This is converted to a percent deviation by dividing
by the period average. This variability coefficient can take on any value greater than
zero, with low values indicating low variability. In order to derive a single value for
each facility in our sample, we average the values calculated for each hour. Facilities are
classified as either high or low variability. The cutoff is chosen at 15 percent. We also
classify building weather sensitivity as either high or low, with the cutoff set at an ROC
coefficient of 0.7. Using this segmentation scheme, we disaggregate our sample of
facilities into four categories, as shown in Table 4.
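The variability coefficient just described can be sketched in a few lines; the example profiles are hypothetical:

```python
# Load variability sketch: for each hour, the mean absolute deviation of the
# load from its period-average, as a percent of that average; then averaged
# over hours to give one coefficient per facility. Profiles are hypothetical.

def variability_percent(days):
    hours = len(days[0])
    per_hour = []
    for h in range(hours):
        vals = [d[h] for d in days]
        avg = sum(vals) / len(vals)
        dev = sum(abs(v - avg) for v in vals) / len(vals)
        per_hour.append(100.0 * dev / avg)
    return sum(per_hour) / hours

# Two admissible days, two hours: hour 0 varies a lot, hour 1 not at all.
coeff = variability_percent([[100.0, 200.0], [300.0, 200.0]])   # 25.0
high_variability = coeff > 15.0   # the 15% cutoff used in the study
```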
In our sample there are three buildings with non-standard schedules, shown in the table
in italics. Two are schools that are closed during the summer, as noted above. The third
is a museum that is closed on Mondays and most Tuesdays. Although these schedules
are perfectly predictable, they deviate from the assumption that normal operating days
are Monday through Friday year-round. This results in an artificially high level of
variability in load (and a correspondingly reduced estimate of weather sensitivity) for these
sites.
Table 4: Classification by load variability (var) and weather sensitivity (ws)
[Sites grouped into the four var/ws categories; the classification columns are not
reproduced legibly in this copy. Legible row labels include Retail6, Retail4, Office2,
Office3, Office/LM7, Office/LM1, Office/LM4, Office/LM8, *Museum, Office/Lab3,
Office5, Office/LM9, Office/LM6, *School1 and *School2; italics/asterisks mark the
three sites with non-standard schedules.]
5.2. Morning Adjustment
Overall, we find that the morning adjustment factor substantially improves the
performance of each baseline model, both in terms of reduced bias and improved
accuracy (see Figures 3 and 4). In Figure 3, we show the average of the absolute errors
between predicted and actual load, which is our accuracy measure, for each site using
the BLP3/BLP3n (highest 3 of 10) model. The sites are labeled by name, and have been
ordered along the x-axis according to the category they belong to with respect to
variability and weather sensitivity. The category order is high-high, high-low, low-high,
and lastly low-low. The shaded bars are for the model with no morning adjustment
applied, and the white bars with the morning adjustment. The vertical axis limits are
chosen to ensure that all the data are visible, and as a result one of the unadjusted values
is off the chart. This plot shows that for almost all the sites, and in particular for the high
variability sites, the morning adjustment leads to a large improvement in the accuracy of
the model prediction. For cases where the adjustment does not improve the result (for
example, Detention Facility), use of the adjustment does not substantially degrade the
model performance.
Figure 5-3: Error magnitude for model BLP3 without and with adjustment
[Bar chart of the average percent error, roughly 0-40%, by site, grouped by
variability/weather-sensitivity category; shaded bars show BLP3n (no adjustment),
open bars BLP3 (with adjustment).]
The results in Figure 3 illustrate the decrease in magnitude of errors between predicted
and actual load when the morning adjustment is applied. In Figure 4, we provide a
slightly more complicated representation of the effect of applying the morning
adjustment factor, using data from all BLP models and all sites. It illustrates the impact
of the adjustment on the likelihood that the model will have a small (less than 5%) error.
Each point on the chart represents a single building-model pair. It is constructed as
follows:

1. For a site and a model with no adjustment applied, we calculate the probability
that the absolute value of the error |e(d,h)| is less than 5%.
2. For a site and a model with the adjustment applied, we calculate the probability
that the absolute value of the error |e(d,h)| is less than 5%.
3. The probability calculated in case (2) is plotted against the probability in case (1).
4. The diagonal is shown on the plot as a heavy dark line.
5. A linear trend line passing through (0,0) is also plotted in black.
The diagonal corresponds to a situation where the morning adjustment has no effect on
the likelihood of a small error. If a point lies above the diagonal, the
probability of a small error is larger when the adjustment is used. The fact that most
points are above the diagonal means that in most cases the morning adjustment
increases the probability that the error will be small. The linear fit shows that on
average, for a given model-site pairing, the probability of small error is increased by
about 25 percent when the morning adjustment is applied. There is broad scatter in the
plot, indicating that some cases are improved a great deal, whereas others are improved
only slightly. Below the diagonal, there are a few cases where the adjustment factor
produces worse results, but in general these differences are small.
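Steps (1) and (2) amount to an empirical probability over the error samples; a minimal sketch with hypothetical error sets for one site/model pair:

```python
# Probability (in percent) that the absolute percent error is below a
# threshold, for one site/model pair. Error samples are hypothetical.

def prob_small_error(errors, threshold=5.0):
    small = sum(1 for e in errors if abs(e) < threshold)
    return 100.0 * small / len(errors)

unadjusted = [6.0, -8.0, 3.0, 12.0, -2.0, 7.0, 1.0, -9.0]
adjusted = [2.0, -3.0, 1.0, 6.0, -1.0, 4.0, 0.5, -4.0]

p_no = prob_small_error(unadjusted)   # 3 of 8 -> 37.5
p_adj = prob_small_error(adjusted)    # 7 of 8 -> 87.5
```

Plotting p_adj against p_no for every building-model pair yields one point of the scatter described above; a point above the diagonal means the adjustment helped.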
Figure 5-4: Comparison: probability of error less than 5% with or without morning adjustment
[Scatter plot: probability for models with the adjustment (vertical axis, 0-100%) versus
probability for models with no adjustment (horizontal axis, 0-100%), one point per
building-model pair, with the diagonal and a linear trend line y = 1.27x (R² = 0.40).]
We have observed two situations where building or facility operating issues are likely to
be misrepresented with morning adjustments. These are related to demand response
end-use strategies that begin prior to the start of the DR event, and are important for
day-ahead or other pre-notification DR programs. The first situation is when pre-
cooling is done only on DR event days, and not on normal days. If the chiller load is
higher than normal on the morning of a DR event day, the baseline load will be adjusted
to a higher value than if the pre-cooling had not occurred. The adjustment reflects a
demand response strategy, not the fact that the day is hotter than normal. In the second
situation, we have observed industrial demand response strategies that involve reducing
the end-use loads one to two hours prior to the beginning of the DR event. This is done
because some industrial loads take time to "unload". In this case the morning load is
lower than it would have been in the absence of a DR event, so the morning adjustment
will scale the baseline down more than is appropriate. These issues suggest that some
information about the building DR strategies would be very useful in assessing whether
and how a morning adjustment should be applied to a baseline model.
5.3. Bias and Accuracy
The next two tables present our analysis of the relative bias and accuracy among the
various BLP models that we tested in our sample of buildings. Table 5 provides results
for the distribution of hourly percent errors e(d,h) between predicted and actual load,
while Table 6 shows the same metrics for the distribution of daily values of the percent
error in the average event-period hourly load E(d). The bias is measured using the
median of the sample of values, and the accuracy is measured by the average of the
absolute value of the error. We present only the percent error data as these are easiest to
compare across buildings. In Tables 5 and 6, the best and worst performing models for
each building are highlighted in blue and grey shading respectively. The table rows are
sorted on the categories for variability (var) and weather sensitivity (ws). The three sites
with anomalous schedules (the two schools and the museum) are noted in italics.

In the table of results for the hourly values e(d,h) we present both model BLP3 and
model BLP3n (highest 3 of 10 with and without the adjustment applied), as the current
practice in California is to use the BLP3n method (no adjustment).
Table 5: Metrics for the hourly percent error e(d,h)
[Table not reproduced legibly in this copy. Rows are the sites, sorted by
variability/weather-sensitivity category; columns give the median (bias) and the mean
absolute value (accuracy) of the percent errors for each BLP model, including both BLP3
and BLP3n; the best and worst performing models for each site are highlighted.]
With respect to the bias indicator, both the BLP3 and the BLP6 models perform well
(BLP6 is the load-temperature model based on the 10 previous days of data). The
weather-dependent BLP6 model is distinguished by the fact that it is the only model that
consistently avoids bias in our sample of buildings. For the accuracy metric it is clear
that the unadjusted 3-in-10 model BLP3n is the least accurate. Table 5 also shows that,
for buildings with low variability, all models (except BLP3n) perform reasonably well,
which is not surprising. For buildings with high weather sensitivity, overall the explicit
weather models (BLP5 and BLP6) either improve the performance for that building or
do not affect it much.
Table 6: Metrics for the average hourly load percent error E(d)
[Table values not reproduced legibly in this copy. Rows are the sites sorted by category
(legible row labels include Office/LM5, Office/DC2, Office/DC3, Retail3, Retail5,
Office/Lab1, Office/LM3 and Bakery); columns give the bias and accuracy metrics for
each BLP model with the morning adjustment applied.]
Table 6 is similar to Table 5, except that the error metrics are derived for the sample of
event-period average hourly load differences E(d). We have also removed the BLP3n
column from this table. The BLP3n model is clearly the least accurate, and by removing
it we can get a sense of which of the BLP models is best/worst when all the models
include the morning adjustment factor. For the bias measure, the results are similar to
Table 5. This is to be expected, as averaging is a linear operation, which is unlikely to
strongly affect the median results. The BLP5 model (seasonal load-temperature) tends
to be the most biased in our sample of buildings. In the accuracy metric, no model
stands out as clearly worse or better than the others. It is interesting to note that the
BLP5 model is frequently both the best and the worst. The building load categorizations
are reasonably good at predicting performance, with BLP5 performing poorly for "h-l"
buildings (high load variability and low weather sensitivity) and well for "l-h" (low
variability and high weather sensitivity) facilities. The "h-h" and "l-l" sample sizes are
small, so one should be careful in drawing conclusions from these data. They do
suggest that, as noted above, for buildings in the "l-l" category all models perform
reasonably well. For the "h-h" category, the BLP6 model (load-temperature based on 10
days of data) consistently avoids bias. It is not clear from these data whether explicit weather
models out-perform averaging models in this category.
5.4. Event Day Shed Load Estimates
ISOs or utilities with DR programs use BLP models to estimate the customer load
reduction achieved from changes to building operation during DR events. The
reduction is defined as the estimated baseline value minus the actual (presumably
curtailed) value. For this analysis, we have used models to predict electric loads on DR
event days for sites that showed some significant demand reductions (these are itemized
in Piette et al 2007 and Piette et al 2005). Figure 5 shows the estimated load reductions
for each site and event day in the data set. For clarity, only a few representative BLP
models are shown. We include three models: BLP3n represents current practice in
California's Demand Bidding program, BLP6 is an example of an explicit weather model,
and BLP3 is the preferred model for most of the facilities in our sample, which combines
a representative-day approach with a same-day morning adjustment.
Load shed estimates are defined as the difference between the estimated average event-
period hourly load and the measured (curtailed) event-period average hourly load. The
results are expressed in percentage terms (i.e., estimated shed load during an event
divided by the actual average hourly load). The data in Figure 5 are sorted on the value of
the predicted shed for the BLP3 model (highest 3 of 10 with morning adjustment). We
exclude sites for which no BLP model predicts a shed of greater than 10%. Note that in
some cases the BLP model baseline values for a site are lower than the actual load; these
negative values are included in Figure 5. This leads to about 85 building-event day
records in the data set. From Figure 5, it is clear that the BLP3n model (no morning
adjustment) generally predicts lower values for the sheds than BLP3, i.e., the morning
adjustment raises the value of the predicted baseline and hence of the load reduction.
The load-temperature (BLP6) model results are scattered around the line defined by the
BLP3 results.
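The shed calculation itself is simple; a sketch with hypothetical event-period hourly values:

```python
# Percent shed sketch: the difference between the event-period average of the
# estimated baseline and of the measured (curtailed) load, divided by the
# measured average. All hourly values below are hypothetical.

def percent_shed(baseline_hours, actual_hours):
    b = sum(baseline_hours) / len(baseline_hours)
    a = sum(actual_hours) / len(actual_hours)
    return 100.0 * (b - a) / a

baseline = [500.0, 520.0, 510.0, 530.0, 520.0, 500.0]   # estimated BLP, noon-6pm
actual = [400.0, 410.0, 420.0, 430.0, 410.0, 400.0]     # curtailed load
shed = percent_shed(baseline, actual)                    # roughly 25%
```

A negative value of this quantity corresponds to the cases noted above where the estimated baseline falls below the actual load.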
Figure 5-5: Predictions of the shed load for event days in California 2005 and 2006
[Scatter plot of the estimated percent shed, roughly -20% to 70%, for each site and event
day in 2005-2006, for models BLP1, BLP3, BLP3n and BLP6.]
It is also useful to compare the aggregate load reductions for our sample of buildings
predicted by the different baseline load profile models. The aggregate is defined as the
total over all buildings participating in the DR event on a given day. Figure 6 shows the
estimated total load reduction from buildings that participated in DR events in June and
July 2006. Eight to ten sites participated in these eleven events as part of PG&E's critical
peak pricing tariff; events covered six hours (noon - 6 pm). Not all sites participated in
every event, but Figure 6 shows the sum of the participant load reductions for the
facilities listed in Tables 5 and 6. The average of the maximum hourly outside dry bulb
temperatures for each site is also shown; average peak temperatures ranged from the
mid-80s °F to about 100 °F.
Our analysis of this sample of buildings that actually reduced load during DR events
suggests the following key results.

· First, for each DR event, the BLP3n model (highest 3 of previous 10 days with no
morning adjustment) estimates the lowest level of demand response, and actually
shows a net negative response in 3 of the 11 events.

· Second, the negative load reductions with the BLP3n model often occur on the
hottest days.

The lowest negative aggregated load reduction took place on July 24th, during a
severe heat wave in California when DR events were called for several days during
a second week of record high temperatures. While there may be some "participant
fatigue" in the load reductions from these 8 to 10 sites on those hot days, the other
three baseline models show 400 to 500 kW of reduction. Interviews with the facility
managers at these sites indicated that they continued to implement their DR
strategies during this heat wave, but their load reductions were not revealed by the
existing BLP approach used in the critical peak pricing tariff (i.e., the BLP3n approach).
This also illustrates a problem that occurs with all averaging methods during multi-
day events. Because event days are excluded from the set of admissible days, an
averaging method will calculate the same unadjusted baseline for every event day if
there are events on consecutive days. The adjustment factors will differ on each day
during the event, but because of alterations to the building operation induced by the
event, the morning loads used to calculate the adjustment may no longer be
representative of the normal correlation of that building's load with that day's
weather. Explicit weather models do not have this problem.
Figure 6: Aggregate estimated load reduction by baseline model
[Line chart of the aggregate estimated load reduction (kW, roughly -400 to 1,400) for
each DR event day, for models BLP1, BLP3, BLP3n and BLP6, with the average peak
temperature (right axis, °F) overlaid.]
· Third, in general the three models that include a morning adjustment (BLP1,
BLP3, and BLP6) show load reductions that average three to five times larger
than the CBL model without the morning adjustment (BLP3n).

This result illustrates the problem of using the BLP3n model for commercial buildings
during heat waves, when the previous days in the baseline were not as hot as the DR
event days and the morning adjustment factor is no longer representative of typical
load-weather correlations.
· Fourth, the results from this research on baseline models to assess DR load
impacts in commercial buildings stand in sharp contrast to previous work in
California by Buege et al (2006). Buege et al found that the 3-in-10-day baseline
model with no morning adjustment (BLP3n) produced the highest estimates of
customer baseline and the largest savings estimates for the California demand
bidding and CPP tariffs. However, the load impacts from the sample of sites
that Buege et al evaluated were dominated by a relatively small number of large
industrial customers.10 In contrast, our results suggest that for weather-sensitive
commercial/institutional customers in California, the 3-in-10-day baseline model
(BLP3n) produces estimates of the customer's baseline that are biased on the low
side, which results in estimated load curtailments that are biased on the low side.
6. Conclusions and Suggestions for Further Work
We believe that the methods used in this study provide a statistically sound approach to
evaluating the performance of different BLP models for a building or set of buildings,
provided sufficient histoncal data are avaiable. The results indicate in general that:
1. The BLP3n model currently used by California utilities to estimate load reductions in
several of their DR programs could be improved substantially if a morning
adjustment factor were applied for commercial and institutional buildings.11
2. Applying a morning adjustment factor significantly reduces the bias and improves
the accuracy of all BLP models examined in our sample of buildings.
3. Characterization of building loads by variability and weather sensitivity is a useful
screening indicator that can be used to predict which types of BLP models will
perform well. We believe that DR program administrators can use the analytic
techniques described in this study to characterize and possibly screen participating
customers' loads.
4. In our sample, BLP models that incorporate temperature (e.g., explicit weather
models) improve the accuracy of the estimated baseline loads, and in cases where
they do not improve accuracy they have relatively little impact.
5. Explicit weather models (in particular, the 10-day version BLP6) are the only model
type that consistently avoids bias in the predicted loads in our sample of buildings.
10 We believe that large industrial customers account for most of the load impacts in the
California Demand Bidding and CPP evaluation study conducted by Buege et al, because of their
load shapes (i.e., high nighttime loads) (Buege et al 2006). Industrial facilities may have
nighttime electric loads that are only twenty to thirty percent lower than, or even greater than,
daytime peak loads. By contrast, the primarily commercial and institutional sector participants
in our sample of California buildings all have nighttime loads that are typically a factor of
three lower than peak hour electric loads.
11 DR baselines are used to estimate load reductions in both the California Demand Bidding
program and CPP tariff for resource planning and B/C screening analysis. The Demand Bidding
Program also uses a BLP method to determine payments to customers for their load reductions as
part of a settlement process.
6. For customer accounts with highly variable loads, we found that no BLP model
produced satisfactory results, although averaging methods perform best in accuracy
(but not bias). These types of customers are difficult to characterize with standard
baseline load profile models that rely on historic loads and weather data. Because
the DR potential and performance in actual DR events for facilities with more
variable loads are harder to predict, measure, and evaluate, it may make more sense to
direct these facilities to enroll in DR programs with rules that require customers to
reduce load to a firm service level or guaranteed load drop (e.g.,
interruptible/curtailable tariffs).
7. For buildings with low load variability, all BLP models perform reasonably well in
accuracy.
8. Similarly, customers that are highly weather-sensitive should be given the option of
using BLP models that explicitly incorporate temperature in assessing their
performance during DR events.
9. Many DR programs apply similar DR BLP methods to both the commercial and
industrial (C&I) sectors. The results of our study, combined with the results of
other recent studies (Quantum 2004 and 2006; Buege et al. 2006), suggest that DR
program administrators should have flexibility and multiple options for suggesting
the most appropriate BLP method for specific types of customers. Key load
characteristics to be considered in BLP methods are weather sensitivity (which is an
issue for many commercial and institutional buildings but not common in industrial
process loads) and variability of loads.
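The statistics these conclusions turn on can be sketched in a few lines: a bias/accuracy pair for each BLP model and a variability score for screening customers. This is a generic illustration, with mean percentage error and mean absolute percentage error standing in for the study's own error metrics, and the coefficient of variation as an assumed variability score.

```python
def bias_and_accuracy(predicted, actual):
    """Relative errors of a BLP model's predicted baseline against the
    actual load over a set of proxy event days.  The mean error
    measures bias (sign matters); the mean absolute error measures
    accuracy.  These generic statistics are illustrative stand-ins."""
    errors = [(p - a) / a for p, a in zip(predicted, actual)]
    bias = sum(errors) / len(errors)
    accuracy = sum(abs(e) for e in errors) / len(errors)
    return bias, accuracy

def load_variability(daily_peaks):
    """Coefficient of variation of daily peak load, a simple screening
    score: accounts with high CV are the ones no BLP model
    characterizes well."""
    mean = sum(daily_peaks) / len(daily_peaks)
    var = sum((x - mean) ** 2 for x in daily_peaks) / len(daily_peaks)
    return var ** 0.5 / mean

# A baseline that systematically under-predicts shows negative bias,
# which in turn understates the estimated load reduction.
actual = [1000, 1100, 1050, 1200]
predicted = [950, 1000, 1000, 1100]
bias, acc = bias_and_accuracy(predicted, actual)
print(f"bias {bias:+.1%}, accuracy {acc:.1%}, CV {load_variability(actual):.2f}")
```

Note the asymmetry the conclusions rely on: a model can score well on accuracy while still carrying a consistent sign on its errors, which is why bias is tracked separately.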
Suggestions for Future Work
From our detailed examination of both the data and the model predictions, we can also
suggest some new approaches that are reasonably straightforward and could improve
the utility of a given model. Below is a list of specific suggestions for future work.
1. For many sites the seasonal load-temperature model (BLP5) is either the best or
worst performer. From the data, it is fairly clear that a linear load-temperature
relationship is crude, and simply changing to a quadratic fitting function may
substantially improve the model performance.
2. Application of the methods developed here to a larger sample of buildings, covering
a wider geographical area, would be very useful in determining the robustness of the
results. The calculation methodologies are fully automated, so larger data sets could
be handled without significant additional effort.
3. The weather data provided by NOAA and CIMIS may occasionally contain
erroneous values, which produce outliers (large errors) in the model predictions.
We have not screened for weather data errors in our analysis, as we wanted to
evaluate the methods as they are currently used by DR program administrators in
California. Screening the weather data for consistency is technically
straightforward, but burdensome if each program participant has to do it on their
own. Given the large number of state agencies that use weather data, and the
extensive infrastructure that already exists for collecting and maintaining it, it
should be feasible to provide DR program participants with access to weather
information that is periodically screened and updated. This would greatly facilitate
the use of explicit weather models.
4. Some buildings have predictable but non-standard schedules (for example, closed
Mondays, closed in summer, etc.). Including this scheduling information in the
selection of the admissible set would reduce the variability in the load data, and
therefore improve BLP model performance. Technically, because the admissible day
selection process used by utilities and ISOs typically screens for weekends etc., it
should be simple to add additional building-specific criteria.
5. Our data set of proxy events is similar to, but not the same as, the actual event day set
in California, and in particular contains milder weather days than are typical for real
events. It may also be useful to investigate whether using a more restricted proxy
event set (e.g., the highest 10% of days in temperature instead of the highest 25%)
would significantly impact the results.12
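Item 1's suggested move from a linear to a quadratic load-temperature fit amounts to ordinary least squares on the basis [1, T, T²]. A self-contained sketch follows; the synthetic cooling-load data and coefficients are invented for illustration.

```python
def fit_quadratic(temps, loads):
    """Least-squares fit of load = c0 + c1*T + c2*T^2 by solving the
    3x3 normal equations (X^T X) c = X^T y with Gaussian elimination."""
    X = [[1.0, t, t * t] for t in temps]
    A = [[sum(r[i] * r[j] for r in X) for j in range(3)] for i in range(3)]
    b = [sum(r[i] * y for r, y in zip(X, loads)) for i in range(3)]
    for i in range(3):                      # elimination with partial pivoting
        p = max(range(i, 3), key=lambda k: abs(A[k][i]))
        A[i], A[p], b[i], b[p] = A[p], A[i], b[p], b[i]
        for k in range(i + 1, 3):
            f = A[k][i] / A[i][i]
            A[k] = [akj - f * aij for akj, aij in zip(A[k], A[i])]
            b[k] -= f * b[i]
    c = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):                     # back substitution
        c[i] = (b[i] - sum(A[i][j] * c[j] for j in range(i + 1, 3))) / A[i][i]
    return c

# Synthetic cooling load that accelerates with temperature: a straight
# line under-predicts at the hot end; the quadratic captures the curve.
temps = [60, 65, 70, 75, 80, 85, 90, 95, 100]
loads = [400 + 0.5 * (t - 60) ** 2 for t in temps]
c0, c1, c2 = fit_quadratic(temps, loads)

def predict(t):
    return c0 + c1 * t + c2 * t * t

print(f"fitted load at 100 F: {predict(100):.0f} kW")
```

With these exactly quadratic data the fit recovers the curvature (c2 ≈ 0.5), so the predicted load at 100 °F is about 1,200 kW; on real interval data the same routine simply returns the best-fitting curve.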
References
(Buege 2006) Buege, A., M. Rufo, M. Ozog, D. Violette, and S. McNicoll 2006. "Prepare
for Impact: Measuring Large C&I Customer Response to DR Programs," 2006 ACEEE
Summer Study on Energy Efficiency in Buildings, Monterey, CA, August.
(CPUC 2007) California Public Utilities Commission 2007. "Order Instituting
Rulemaking Regarding Policies and Protocols for Demand Response Load Impacts
Estimates, Cost-Effectiveness Methodologies, Megawatt Goals and Alignment with
California Independent System Operator Market Design Protocols," OIR-07-01-041,
January.
(EnerNOC 2006) Kozikowski, D., A. Breidenbaugh, and M. Potter 2006. The Demand
Response Baseline, v.1.75. EnerNOC OPS Publication.
(FERC 2006) FERC Staff Report 2006. Assessment of Demand Response and Advanced
Metering, Docket Number AD-06-2-000, Aug.
(KEMA 2003) Goldberg, M.L., and G. Kennedy Agnew 2003. Protocol Development for
Demand-Response Calculations: Findings and Recommendations. Prepared for the California
Energy Commission by KEMA-Xenergy. CEC 400-02-017F.
12 If sufficient time periods of data are available, this is easy to implement. For example, if 2007
data were added to our data set, the top 10% of days with high temperatures would provide a
large enough sample to do this type of analysis.
(KEMA 2007) Goldberg, M.L. 2007. Customer Baselines for Demand Response Programs.
Presentation to the Midwest Demand Response Initiative DR Program Design Sub-group,
July 9.
(Piette et al, 2007) Piette, M.A., D.S. Watson, N. Motegi, and S. Kiliccote 2007.
Automated Critical Peak Pricing Field Tests: 2006 Program Description and Results. Draft
Report.
(Piette et al, 2005) Piette, M.A., D.S. Watson, N. Motegi, and N. Bourassa 2005. Findings
from the 2004 Fully Automated Demand Response Tests in Large Facilities. LBNL Report
Number 58178.
(PJM 2007) Amended and Restated Operating Agreement of PJM Interconnection LLC,
effective June 12, 2007, Section 3.3.A.
(Quantum 2004) Working Group 2 Demand Response Program Evaluation - Program Year
2004 Final Report. Prepared for the Working Group 2 Measurement and Evaluation
Committee, by Quantum Consulting Inc. and Summit Blue Consulting, LLC, 2004.
(Quantum 2006) Evaluation of 2005 Statewide Large Nonresidential Day-ahead and Reliability
Demand Response Programs. Prepared for Southern California Edison and the Working
Group 2 Measurement and Evaluation Committee, by Quantum Consulting Inc. and
Summit Blue Consulting, LLC, 2006.
BEFORE THE
IDAHO PUBLIC UTILITIES COMMISSION
CASE NO. IPC-E-09-02
IDAHO POWER COMPANY
RESPONSE TO STAFF'S
PRODUCTION REQUEST NO. 15
[Attachment: PNM PeakSaver Event Summary tables and load charts for five DR events. The tables report, for each half-hour interval from event start (ES) through ES+5:00, the metered load (kW), the calculated baseline, the difference, the percent under baseline, and the percent of nominated load, for the events of 8/1/2008 (Nom: 9,625 kW), 7/31/2008 (Nom: 9,425 kW), 7/30/2008 (Nom: 9,425 kW), 6/17/2008 (Nom: 7,620 kW), and 6/16/2008 (Nom: 7,620 kW). The accompanying charts plot load (kW) against time of day from roughly 12:00 PM to 7:00 PM, with the notification time, event start, and event end marked.]