historical_inputs modules¶
This module provides tools for accessing historical market data and preprocessing it for compatibility with the SpotMarket class.
xml_cache¶
Exceptions:
MissingDataError – Raised when data cannot be downloaded from NEMWeb.
Classes:
XMLCacheManager – Class for accessing data stored in AEMO's NEMDE output files.
-
class nempy.historical_inputs.xml_cache.XMLCacheManager(cache_folder)¶
Class for accessing data stored in AEMO's NEMDE output files.
Examples
An XMLCacheManager instance is created by providing the path to the directory containing the cache of XML files.
>>> manager = XMLCacheManager('test_nemde_cache')
- Parameters
cache_folder (str) – Path to the directory containing the cache of XML files.
Methods:
find_intervals_with_violations(limit, …) – Find the set of dispatch intervals where the non-intervention dispatch runs had constraint violations.
get_UIGF_values() – Get the unit unconstrained intermittent generation forecast.
get_constraint_interconnector_lhs() – Get generic constraints lhs term interconnector coefficients.
get_constraint_region_lhs() – Get generic constraints lhs term regional coefficients.
get_constraint_rhs() – Get generic constraints rhs values.
get_constraint_type() – Get generic constraints type.
get_constraint_unit_lhs() – Get generic constraints lhs term unit coefficients.
get_constraint_violation_prices() – Get the price of violating different constraint sets.
get_file_name() – Get the filename of the currently loaded interval.
get_file_path() – Get the file path to the currently loaded interval.
get_market_interconnector_link_bid_availability() – Get the bid availability of market interconnectors.
get_unit_fast_start_parameters() – Get the unit fast start dispatch inflexibility parameter values.
get_unit_initial_conditions() – Get the initial conditions of units at the start of the dispatch interval.
get_unit_volume_bids() – Get the unit volume bids.
get_violations() – Get the total volume violation of different constraint sets.
interval_inputs_in_cache() – Check if the cache contains the data for the loaded interval, primarily for debugging.
is_intervention_period() – Check if the interval currently loaded was subject to an intervention.
load_interval(interval) – Load the data for a particular 5 min dispatch interval into memory.
-
load_interval(interval)¶
Load the data for a particular 5 min dispatch interval into memory.
If the interval's data is not on disk then an attempt is made to download it from AEMO's NEMWeb portal.
Examples
>>> manager = XMLCacheManager('test_nemde_cache')
>>> manager.load_interval('2019/01/01 00:00:00')
- Parameters
interval (str) – In the format ‘%Y/%m/%d %H:%M:%S’
- Raises
MissingDataError – If the data for an interval is not in the cache and cannot be downloaded from NEMWeb.
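When processing many intervals it can be convenient to skip intervals whose data is unavailable rather than stop. A minimal sketch (the interval strings are illustrative; MissingDataError is the exception documented above, assumed importable from this module):
>>> from nempy.historical_inputs.xml_cache import MissingDataError
>>> for interval in ['2019/01/01 00:00:00', '2019/01/01 00:05:00']:  # illustrative intervals
...     try:
...         manager.load_interval(interval)
...     except MissingDataError:
...         pass  # skip intervals that are neither cached nor downloadable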
-
interval_inputs_in_cache()¶
Check if the cache contains the data for the loaded interval, primarily for debugging.
Examples
>>> manager = XMLCacheManager('test_nemde_cache')
>>> manager.load_interval('2019/01/01 00:00:00')
>>> manager.interval_inputs_in_cache()
True
- Returns
- Return type
bool
-
get_file_path()¶
Get the file path to the currently loaded interval.
Examples
>>> manager = XMLCacheManager('test_nemde_cache')
>>> manager.load_interval('2019/01/01 00:00:00')
>>> manager.get_file_path()
PosixPath('test_nemde_cache/NEMSPDOutputs_2018123124000.loaded')
-
get_file_name()¶
Get the filename of the currently loaded interval.
Examples
>>> manager = XMLCacheManager('test_nemde_cache')
>>> manager.load_interval('2019/01/01 00:00:00')
>>> manager.get_file_name()
'NEMSPDOutputs_2018123124000.loaded'
-
get_unit_initial_conditions()¶
Get the initial conditions of units at the start of the dispatch interval.
Examples
>>> manager = XMLCacheManager('test_nemde_cache')
>>> manager.load_interval('2019/01/01 00:00:00')
>>> manager.get_unit_initial_conditions()
         DUID   INITIALMW  RAMPUPRATE  RAMPDOWNRATE  AGCSTATUS
0      AGLHAL    0.000000         NaN           NaN        0.0
1      AGLSOM    0.000000         NaN           NaN        0.0
2     ANGAST1    0.000000         NaN           NaN        0.0
3       APD01    0.000000         NaN           NaN        0.0
4       ARWF1   54.500000         NaN           NaN        0.0
..        ...         ...         ...           ...        ...
283  YARWUN_1  140.360001         NaN           NaN        0.0
284     YWPS1  366.665833  177.750006    177.750006        1.0
285     YWPS2  374.686066  190.125003    190.125003        1.0
286     YWPS3    0.000000  300.374994    300.374994        0.0
287     YWPS4  368.139252  182.249994    182.249994        1.0

[288 rows x 5 columns]
- Returns
Columns:        Description:
DUID            unique identifier of a dispatch unit, (as str)
INITIALMW       the output or consumption of the unit at the start of the interval, in MW, (as np.float64)
RAMPUPRATE      ramp up rate of the unit as reported by the SCADA system at the start of the interval, in MW/h, (as np.float64)
RAMPDOWNRATE    ramp down rate of the unit as reported by the SCADA system at the start of the interval, in MW/h, (as np.float64)
AGCSTATUS       flag to indicate whether the unit is connected to the AGC system at the start of the interval, 0.0 if not and 1.0 if it is, (as np.float64)
- Return type
pd.DataFrame
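Since the ramp rates are reported in MW/h, a common preprocessing step is converting them into MW limits for a single 5 min dispatch interval. A minimal sketch using the DataFrame returned above (the variable names are illustrative):
>>> initial_conditions = manager.get_unit_initial_conditions()
>>> # 5 min is 1/12 of an hour, so scale the hourly ramp rates accordingly.
>>> ramp_up_limit = initial_conditions['INITIALMW'] + initial_conditions['RAMPUPRATE'] / 12
>>> ramp_down_limit = initial_conditions['INITIALMW'] - initial_conditions['RAMPDOWNRATE'] / 12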
-
get_unit_fast_start_parameters()¶
Get the unit fast start dispatch inflexibility parameter values.
Examples
>>> manager = XMLCacheManager('test_nemde_cache')
>>> manager.load_interval('2019/01/01 00:00:00')
>>> manager.get_unit_fast_start_parameters()
        DUID  MinLoadingMW  CurrentMode  CurrentModeTime  T1  T2  T3  T4
0     AGLHAL             2            0                0  10   3  10   2
1     AGLSOM            16            0                0  20   2  35   2
2   BARCALDN            12            0                0  14   4   1   4
3   BARRON-1             5            4                1  11   3   1   1
4   BARRON-2             5            4                1  11   3   1   1
..       ...           ...          ...              ...  ..  ..  ..  ..
69     VPGS5            48            0                0   5   3  15   0
70     VPGS6            48            0                0   5   3  15   0
71   W/HOE#1           160            0                0   3   0   0   0
72   W/HOE#2           160            0                0   3   0   0   0
73    YABULU            83            0                0   5   6  42   6

[74 rows x 8 columns]
- Returns
Columns:            Description:
DUID                unique identifier of a dispatch unit, (as str)
MinLoadingMW        the minimum loading of the unit, in MW, (as np.int64)
CurrentMode         the dispatch mode of the unit at the start of the interval, for mode definitions see the AEMO docs, (as np.int64)
CurrentModeTime     the time already spent in the current mode, in minutes, (as np.int64)
T1                  the total length of mode 1, in minutes, (as np.int64)
T2                  the total length of mode 2, in minutes, (as np.int64)
T3                  the total length of mode 3, in minutes, (as np.int64)
T4                  the total length of mode 4, in minutes, (as np.int64)
- Return type
pd.DataFrame
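To illustrate how the profile columns relate, the total length of a unit's fast start dispatch profile is the sum of the four mode lengths. A minimal sketch (not a nempy helper):
>>> params = manager.get_unit_fast_start_parameters()
>>> profile_length = params[['T1', 'T2', 'T3', 'T4']].sum(axis=1)  # total profile length, in minutes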
-
get_unit_volume_bids()¶
Get the unit volume bids.
Examples
>>> manager = XMLCacheManager('test_nemde_cache')
>>> manager.load_interval('2019/01/01 00:00:00')
>>> manager.get_unit_volume_bids()
         DUID     BIDTYPE  MAXAVAIL  ENABLEMENTMIN  ENABLEMENTMAX  LOWBREAKPOINT  HIGHBREAKPOINT  BANDAVAIL1  BANDAVAIL2  BANDAVAIL3  BANDAVAIL4  BANDAVAIL5  BANDAVAIL6  BANDAVAIL7  BANDAVAIL8  BANDAVAIL9  BANDAVAIL10  RAMPDOWNRATE  RAMPUPRATE
0      AGLHAL      ENERGY     173.0            0.0            0.0            0.0             0.0         0.0         0.0         0.0         0.0         0.0         0.0        60.0         0.0         0.0        160.0         720.0       720.0
1      AGLSOM      ENERGY     160.0            0.0            0.0            0.0             0.0         0.0         0.0         0.0         0.0         0.0         0.0         0.0         0.0         0.0        170.0         480.0       480.0
2     ANGAST1      ENERGY      43.0            0.0            0.0            0.0             0.0         0.0         0.0         0.0         0.0        50.0         0.0         0.0         0.0         0.0         50.0         840.0       840.0
3       APD01   LOWER5MIN       0.0            0.0            0.0            0.0             0.0         0.0         0.0         0.0         0.0         0.0         0.0         0.0         0.0         0.0        300.0           0.0         0.0
4       APD01  LOWER60SEC       0.0            0.0            0.0            0.0             0.0         0.0         0.0         0.0         0.0         0.0         0.0         0.0         0.0         0.0        300.0           0.0         0.0
...       ...         ...       ...            ...            ...            ...             ...         ...         ...         ...         ...         ...         ...         ...         ...         ...          ...           ...         ...
1021    YWPS4   LOWER6SEC      25.0          250.0          385.0          275.0           385.0        15.0        10.0         0.0         0.0         0.0         0.0         0.0         0.0         0.0          0.0           0.0         0.0
1022    YWPS4   RAISE5MIN       0.0          250.0          390.0          250.0           380.0         0.0         0.0         0.0         0.0         5.0         0.0         0.0         5.0         0.0         10.0           0.0         0.0
1023    YWPS4    RAISEREG      15.0          250.0          385.0          250.0           370.0         0.0         0.0         0.0         0.0         0.0         0.0         5.0        10.0         0.0          5.0           0.0         0.0
1024    YWPS4  RAISE60SEC      10.0          220.0          400.0          220.0           390.0         0.0         0.0         0.0         0.0         0.0         5.0         5.0         0.0         0.0         10.0           0.0         0.0
1025    YWPS4   RAISE6SEC      15.0          220.0          405.0          220.0           390.0         0.0         0.0        10.0         5.0         0.0         0.0         0.0         0.0         0.0         10.0           0.0         0.0

[1026 rows x 19 columns]
- Returns
Columns:        Description:
DUID            unique identifier of a dispatch unit, (as str)
BIDTYPE         the service the bid applies to, (as str)
MAXAVAIL        the bid in unit availability, in MW, (as np.float64)
ENABLEMENTMIN   in MW, (as np.float64)
ENABLEMENTMAX   in MW, (as np.float64)
LOWBREAKPOINT   in MW, (as np.float64)
HIGHBREAKPOINT  in MW, (as np.float64)
BANDAVAIL1      the volume bid in the first bid band, in MW, (as np.float64)
:
BANDAVAIL10     the volume bid in the tenth bid band, in MW, (as np.float64)
RAMPDOWNRATE    the bid in ramp down rate, in MW/h, (as np.float64)
RAMPUPRATE      the bid in ramp up rate, in MW/h, (as np.float64)
- Return type
pd.DataFrame
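A common use of this table is computing the total volume offered by each unit for each service across the ten bid bands. A minimal sketch (the TOTALBANDAVAIL column name is illustrative):
>>> bids = manager.get_unit_volume_bids()
>>> band_cols = ['BANDAVAIL{0}'.format(i) for i in range(1, 11)]
>>> bids['TOTALBANDAVAIL'] = bids[band_cols].sum(axis=1)  # total volume offered, in MW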
-
get_UIGF_values()¶
Get the unit unconstrained intermittent generation forecast.
Examples
>>> manager = XMLCacheManager('test_nemde_cache')
>>> manager.load_interval('2019/01/01 00:00:00')
>>> manager.get_UIGF_values()
        DUID    UIGF
0      ARWF1  56.755
1   BALDHWF1   9.160
2      BANN1   0.000
3     BLUFF1   4.833
4     BNGSF1   0.000
..       ...     ...
57     WGWF1  25.445
58   WHITSF1   0.000
59  WOODLWN1   0.075
60     WRSF1   0.000
61     WRWF1  15.760

[62 rows x 2 columns]
- Returns
Columns:    Description:
DUID        unique identifier of a dispatch unit, (as str)
UIGF        the unit's generation forecast for the end of the interval, in MW, (as np.float64)
- Return type
pd.DataFrame
-
get_violations()¶
Get the total volume violation of different constraint sets.
For more information on the constraint sets see the AEMO docs.
Examples
>>> manager = XMLCacheManager('test_nemde_cache')
>>> manager.load_interval('2019/01/01 00:00:00')
>>> manager.get_violations()
{'regional_demand': 0.0, 'interocnnector': 0.0, 'generic_constraint': 0.0, 'ramp_rate': 0.0, 'unit_capacity': 0.36, 'energy_constraint': 0.0, 'energy_offer': 0.0, 'fcas_profile': 0.0, 'fast_start': 0.0, 'mnsp_ramp_rate': 0.0, 'msnp_offer': 0.0, 'mnsp_capacity': 0.0, 'ugif': 0.0}
- Returns
- Return type
dict
-
get_constraint_violation_prices()¶
Get the price of violating different constraint sets.
For more information on the constraint sets see the AEMO docs.
Examples
>>> manager = XMLCacheManager('test_nemde_cache')
>>> manager.load_interval('2019/01/01 00:00:00')
>>> manager.get_constraint_violation_prices()
{'regional_demand': 2175000.0, 'interocnnector': 16675000.0, 'generic_constraint': 435000.0, 'ramp_rate': 16747500.0, 'unit_capacity': 5365000.0, 'energy_offer': 16457500.0, 'fcas_profile': 2247500.0, 'fcas_max_avail': 2247500.0, 'fcas_enablement_min': 1015000.0, 'fcas_enablement_max': 1015000.0, 'fast_start': 16385000.0, 'mnsp_ramp_rate': 16747500.0, 'msnp_offer': 16457500.0, 'mnsp_capacity': 5292500.0, 'uigf': 5582500.0, 'voll': 14500.0}
- Returns
- Return type
dict
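Together with get_violations, these prices can be used to estimate the penalty cost incurred for violations in the loaded interval. A minimal sketch (only keys present in both dictionaries are summed, since the two key sets differ slightly):
>>> violations = manager.get_violations()
>>> prices = manager.get_constraint_violation_prices()
>>> penalty_cost = sum(volume * prices[key]
...                    for key, volume in violations.items() if key in prices)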
-
is_intervention_period()¶
Check if the interval currently loaded was subject to an intervention.
Examples
>>> manager = XMLCacheManager('test_nemde_cache')
>>> manager.load_interval('2019/01/01 00:00:00')
>>> manager.is_intervention_period()
False
- Returns
- Return type
bool
-
get_constraint_rhs()¶
Get generic constraints rhs values.
Examples
>>> manager = XMLCacheManager('test_nemde_cache')
>>> manager.load_interval('2019/01/01 00:00:00')
>>> manager.get_constraint_rhs()
                     set           rhs
0               #BANN1_E     32.000000
1              #BNGSF2_E      3.000000
2            #CROWLWF1_E     43.000000
3             #CSPVPS1_E     29.000000
4             #DAYDSF1_E      0.000000
..                   ...           ...
704          V_OWF_NRB_0  10000.001000
705  V_OWF_TGTSNRBHTN_30  10030.000000
706        V_S_NIL_ROCOF    812.280029
707          V_T_NIL_BL1    478.000000
708        V_T_NIL_FCSPS    425.154024

[709 rows x 2 columns]
- Returns
Columns:    Description:
set         the unique identifier of the generic constraint, (as str)
rhs         the rhs value of the constraint, (as np.float64)
- Return type
pd.DataFrame
-
get_constraint_type()¶
Get generic constraints type.
Examples
>>> manager = XMLCacheManager('test_nemde_cache')
>>> manager.load_interval('2019/01/01 00:00:00')
>>> manager.get_constraint_type()
                     set type       cost
0               #BANN1_E   LE  5220000.0
1              #BNGSF2_E   LE  5220000.0
2            #CROWLWF1_E   LE  5220000.0
3             #CSPVPS1_E   LE  5220000.0
4             #DAYDSF1_E   LE  5220000.0
..                   ...  ...        ...
704          V_OWF_NRB_0   LE  5220000.0
705  V_OWF_TGTSNRBHTN_30   LE  5220000.0
706        V_S_NIL_ROCOF   LE   507500.0
707          V_T_NIL_BL1   LE  5220000.0
708        V_T_NIL_FCSPS   LE   435000.0

[709 rows x 3 columns]
- Returns
Columns:    Description:
set         the unique identifier of the generic constraint, (as str)
type        the type of constraint, i.e. '=', '<=' or '>=', (as str)
cost        the cost of violating the constraint, (as np.float64)
- Return type
pd.DataFrame
-
get_constraint_region_lhs()¶
Get generic constraints lhs term regional coefficients.
This is a compact way of describing constraints that apply to all units in a region. If a constraint set appears here and also in the unit specific lhs table, then the coefficient used in the constraint is the sum of the two coefficients; this can be used to exclude particular units from otherwise region wide constraints (a combined-coefficient sketch follows the get_constraint_unit_lhs entry below).
Examples
>>> manager = XMLCacheManager('test_nemde_cache')
>>> manager.load_interval('2019/01/01 00:00:00')
>>> manager.get_constraint_region_lhs()
                   set region service  coefficient
0        F_I+LREG_0120   NSW1    L5RE          1.0
1        F_I+LREG_0120   QLD1    L5RE          1.0
2        F_I+LREG_0120    SA1    L5RE          1.0
3        F_I+LREG_0120   TAS1    L5RE          1.0
4        F_I+LREG_0120   VIC1    L5RE          1.0
..                 ...    ...     ...          ...
478   F_T+NIL_WF_TG_R5   TAS1    R5RE          1.0
479   F_T+NIL_WF_TG_R6   TAS1    R6SE          1.0
480  F_T+NIL_WF_TG_R60   TAS1    R60S          1.0
481      F_T+RREG_0050   TAS1    R5RE          1.0
482    F_T_NIL_MINP_R6   TAS1    R6SE          1.0

[483 rows x 4 columns]
- Returns
Columns:        Description:
set             the unique identifier of the generic constraint, (as str)
region          the region the constraint applies in, (as str)
service         the service the constraint applies to, (as str)
coefficient     the coefficient of the term on the lhs, (as np.float64)
- Return type
pd.DataFrame
-
get_constraint_unit_lhs()¶
Get generic constraints lhs term unit coefficients.
If a constraint set appears here and also in the region lhs table, then the coefficient used in the constraint is the sum of the two coefficients; this can be used to exclude particular units from otherwise region wide constraints.
Examples
>>> manager = XMLCacheManager('test_nemde_cache')
>>> manager.load_interval('2019/01/01 00:00:00')
>>> manager.get_constraint_unit_lhs()
                      set      unit service  coefficient
0                #BANN1_E     BANN1    ENOF          1.0
1               #BNGSF2_E    BNGSF2    ENOF          1.0
2             #CROWLWF1_E  CROWLWF1    ENOF          1.0
3              #CSPVPS1_E   CSPVPS1    ENOF          1.0
4              #DAYDSF1_E   DAYDSF1    ENOF          1.0
...                   ...       ...     ...          ...
5864      V_ARWF_FSTTRP_5     ARWF1    ENOF          1.0
5865      V_MTGBRAND_33WT  MTGELWF1    ENOF          1.0
5866     V_OAKHILL_TFB_42  OAKLAND1    ENOF          1.0
5867          V_OWF_NRB_0  OAKLAND1    ENOF          1.0
5868  V_OWF_TGTSNRBHTN_30  OAKLAND1    ENOF          1.0

[5869 rows x 4 columns]
- Returns
Columns:        Description:
set             the unique identifier of the generic constraint, (as str)
unit            the unit the constraint applies to, (as str)
service         the service the constraint applies to, (as str)
coefficient     the coefficient of the term on the lhs, (as np.float64)
- Return type
pd.DataFrame
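As noted above, where a constraint set has both regional and unit specific lhs terms the effective unit coefficient is the sum of the two. A hedged sketch of combining the tables (unit_to_region, a DataFrame mapping each unit to its region with columns ['unit', 'region'], is a hypothetical input not provided by this class; pd is pandas):
>>> region_lhs = manager.get_constraint_region_lhs()
>>> unit_lhs = manager.get_constraint_unit_lhs()
>>> # Expand the regional terms to per-unit terms using the hypothetical mapping.
>>> expanded = region_lhs.merge(unit_to_region, on='region')[['set', 'unit', 'service', 'coefficient']]
>>> combined = pd.concat([expanded, unit_lhs])
>>> combined = combined.groupby(['set', 'unit', 'service'], as_index=False)['coefficient'].sum()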
-
get_constraint_interconnector_lhs()¶
Get generic constraints lhs term interconnector coefficients.
Examples
>>> manager = XMLCacheManager('test_nemde_cache')
>>> manager.load_interval('2019/01/01 00:00:00')
>>> manager.get_constraint_interconnector_lhs()
                     set interconnector  coefficient
0               DATASNAP      N-Q-MNSP1          1.0
1        DATASNAP_DFS_LS      N-Q-MNSP1          1.0
2      DATASNAP_DFS_NCAN      N-Q-MNSP1          1.0
3    DATASNAP_DFS_NCWEST      N-Q-MNSP1          1.0
4      DATASNAP_DFS_NNTH      N-Q-MNSP1          1.0
..                   ...            ...          ...
631      V^^S_NIL_TBSE_1           V-SA          1.0
632      V^^S_NIL_TBSE_2           V-SA          1.0
633        V_S_NIL_ROCOF           V-SA          1.0
634          V_T_NIL_BL1      T-V-MNSP1         -1.0
635        V_T_NIL_FCSPS      T-V-MNSP1         -1.0

[636 rows x 3 columns]
- Returns
Columns:        Description:
set             the unique identifier of the generic constraint, (as str)
interconnector  the interconnector the constraint applies to, (as str)
coefficient     the coefficient of the term on the lhs, (as np.float64)
- Return type
pd.DataFrame
-
get_market_interconnector_link_bid_availability()¶
Get the bid availability of market interconnectors.
Examples
>>> manager = XMLCacheManager('test_nemde_cache')
>>> manager.load_interval('2019/01/01 00:00:00')
>>> manager.get_market_interconnector_link_bid_availability()
  interconnector to_region  availability
0      T-V-MNSP1      TAS1         478.0
1      T-V-MNSP1      VIC1         478.0
- Returns
Columns:        Description:
interconnector  the market interconnector the availability applies to, (as str)
to_region       the direction the bid availability applies to, (as str)
availability    the availability as bid in by the interconnector, (as np.float64)
- Return type
pd.DataFrame
-
find_intervals_with_violations(limit, start_year, start_month, end_year, end_month)¶
Find the set of dispatch intervals where the non-intervention dispatch runs had constraint violations.
Examples
>>> manager = XMLCacheManager('test_nemde_cache')
>>> manager.load_interval('2019/01/01 00:00:00')
>>> manager.find_intervals_with_violations(limit=3, start_year=2019, start_month=1, end_year=2019, end_month=1)
{'2019/01/01 00:00:00': ['unit_capacity'], '2019/01/01 00:05:00': ['unit_capacity'], '2019/01/01 00:10:00': ['unit_capacity']}
- Parameters
limit (int) – number of intervals to find; finds the first intervals in chronological order
start_year (int) – year to start search
start_month (int) – month to start search
end_year (int) – year to end search
end_month (int) – month to end search
- Returns
- Return type
dict
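The returned dictionary maps interval strings to the names of the violated constraint sets, so it can drive further per-interval analysis. A minimal sketch:
>>> violating = manager.find_intervals_with_violations(limit=3, start_year=2019, start_month=1,
...                                                    end_year=2019, end_month=1)
>>> for interval, constraint_sets in violating.items():
...     manager.load_interval(interval)  # e.g. then inspect manager.get_violations()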
mms_db¶
Classes:
DBManager – Constructs and manages a sqlite database for accessing historical inputs for NEM spot market dispatch.
InputsBySettlementDate – Manages retrieving dispatch inputs by SETTLEMENTDATE.
InputsByIntervalDateTime – Manages retrieving dispatch inputs by INTERVAL_DATETIME.
InputsByDay – Manages retrieving dispatch inputs by SETTLEMENTDATE, where inputs are stored on a daily basis.
InputsStartAndEnd – Manages retrieving dispatch inputs by START_DATE and END_DATE.
InputsByMatchDispatchConstraints – Manages retrieving dispatch inputs by matching against the DISPATCHCONSTRAINTS table.
InputsByEffectiveDateVersionNoAndDispatchInterconnector – Manages retrieving dispatch inputs by EFFECTIVEDATE and VERSIONNO.
InputsByEffectiveDateVersionNo – Manages retrieving dispatch inputs by EFFECTIVEDATE and VERSIONNO.
InputsNoFilter – Manages retrieving dispatch inputs where no filter is required.
-
class nempy.historical_inputs.mms_db.DBManager(connection)¶
Constructs and manages a sqlite database for accessing historical inputs for NEM spot market dispatch.
Constructs a database if none exists, otherwise connects to an existing database. Specific datasets can be added to the database from the AEMO NEMWeb portal, and inputs can be retrieved on a 5 min dispatch interval basis.
Examples
Create the database or connect to an existing one.
>>> import sqlite3
>>> con = sqlite3.connect('historical.db')
Create the database manager.
>>> historical = DBManager(con)
Create a set of default tables in the database.
>>> historical.create_tables()
Add data from the AEMO NEMWeb data portal. In this case we are adding data from the table BIDDAYOFFER_D, which contains units' price bids by market day; the data comes in monthly chunks.
>>> historical.BIDDAYOFFER_D.add_data(year=2020, month=1)
>>> historical.BIDDAYOFFER_D.add_data(year=2020, month=2)
This table has an add_data method, indicating that data provided by AEMO comes in monthly files that do not overlap. If you need data for multiple months then multiple add_data calls can be made, as shown below.
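For example, a longer run of months could be assembled with a simple loop; a minimal sketch (the month range is illustrative):
>>> for month in [3, 4, 5]:  # illustrative months
...     historical.BIDDAYOFFER_D.add_data(year=2020, month=month)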
Data for a specific 5 min dispatch interval can then be retrieved.
>>> print(historical.BIDDAYOFFER_D.get_data('2020/01/10 12:35:00').head())
        SETTLEMENTDATE     DUID     BIDTYPE  ...    T3   T4  MINIMUMLOAD
0  2020/01/10 00:00:00   AGLHAL      ENERGY  ...  10.0  2.0          2.0
1  2020/01/10 00:00:00   AGLSOM      ENERGY  ...  35.0  2.0         16.0
2  2020/01/10 00:00:00  ANGAST1      ENERGY  ...   0.0  0.0         46.0
3  2020/01/10 00:00:00    APD01   LOWER5MIN  ...   0.0  0.0          0.0
4  2020/01/10 00:00:00    APD01  LOWER60SEC  ...   0.0  0.0          0.0

[5 rows x 18 columns]
Some tables will have a set_data method instead of an add_data method, indicating that the most recent data file provided by AEMO contains all historical data for the table. In this case, if multiple calls to the set_data method are made, the new data replaces the old.
>>> historical.DUDETAILSUMMARY.set_data(year=2020, month=2)
Data for a specific 5 min dispatch interval can then be retrieved.
>>> print(historical.DUDETAILSUMMARY.get_data('2020/01/10 12:35:00').head())
       DUID           START_DATE  ...  DISTRIBUTIONLOSSFACTOR  SCHEDULE_TYPE
0    AGLHAL  2019/07/01 00:00:00  ...                  1.0000      SCHEDULED
1   AGLNOW1  2019/07/01 00:00:00  ...                  1.0000  NON-SCHEDULED
2  AGLSITA1  2019/07/01 00:00:00  ...                  1.0000  NON-SCHEDULED
3    AGLSOM  2019/07/01 00:00:00  ...                  0.9891      SCHEDULED
4   ANGAST1  2019/07/01 00:00:00  ...                  0.9890      SCHEDULED

[5 rows x 9 columns]
- Parameters
connection (sqlite3.Connection) – Connection to the sqlite database to use.
-
BIDPEROFFER_D¶
Unit volume bids by 5 min dispatch intervals.
-
BIDDAYOFFER_D¶
Unit price bids by market day.
-
DISPATCHREGIONSUM¶
Regional demand terms by 5 min dispatch intervals.
-
DISPATCHLOAD¶
Unit operating conditions by 5 min dispatch intervals.
-
DUDETAILSUMMARY¶
Unit information by the start and end times of when the information is applicable.
-
DISPATCHCONSTRAINT¶
The generic constraints that were used in each 5 min interval dispatch.
-
GENCONDATA¶
The generic constraints information; their applicability to a particular dispatch interval is determined by reference to DISPATCHCONSTRAINT.
-
SPDREGIONCONSTRAINT¶
The regional lhs terms in generic constraints; their applicability to a particular dispatch interval is determined by reference to DISPATCHCONSTRAINT.
-
SPDCONNECTIONPOINTCONSTRAINT¶
The connection point lhs terms in generic constraints; their applicability to a particular dispatch interval is determined by reference to DISPATCHCONSTRAINT.
-
SPDINTERCONNECTORCONSTRAINT¶
The interconnector lhs terms in generic constraints; their applicability to a particular dispatch interval is determined by reference to DISPATCHCONSTRAINT.
-
INTERCONNECTOR¶
The regions that each interconnector links.
-
INTERCONNECTORCONSTRAINT¶
Interconnector properties FROMREGIONLOSSSHARE, LOSSCONSTANT, LOSSFLOWCOEFFICIENT, MAXMWIN, MAXMWOUT by EFFECTIVEDATE and VERSIONNO.
-
LOSSMODEL¶
Break points used in linearly interpolating interconnector loss functions by EFFECTIVEDATE and VERSIONNO.
-
LOSSFACTORMODEL¶
Coefficients of demand terms in interconnector loss functions.
-
DISPATCHINTERCONNECTORRES¶
Record of which interconnectors were used in a particular dispatch interval.
Methods:
create_tables() – Drops any existing default tables and creates new ones; generally called when a new database is created.
-
create_tables()¶
Drops any existing default tables and creates new ones; this method is generally called when a new database is created.
Examples
Create the database or connect to an existing one.
>>> import sqlite3
>>> con = sqlite3.connect('historical.db')
Create the database manager.
>>> historical = DBManager(con)
Create a set of default tables in the database.
>>> historical.create_tables()
Default tables will now exist, but will be empty.
>>> print(pd.read_sql("Select * from DISPATCHREGIONSUM", con=con)) Empty DataFrame Columns: [SETTLEMENTDATE, REGIONID, TOTALDEMAND, DEMANDFORECAST, INITIALSUPPLY] Index: []
If you add data and then call create_tables again, any added data will be deleted.
>>> historical.DISPATCHREGIONSUM.add_data(year=2020, month=1)
>>> print(pd.read_sql("Select * from DISPATCHREGIONSUM limit 3", con=con)) SETTLEMENTDATE REGIONID TOTALDEMAND DEMANDFORECAST INITIALSUPPLY 0 2020/01/01 00:05:00 NSW1 7245.31 -26.35352 7284.32178 1 2020/01/01 00:05:00 QLD1 6095.75 -24.29639 6129.36279 2 2020/01/01 00:05:00 SA1 1466.53 1.47190 1452.25647
>>> historical.create_tables()
>>> print(pd.read_sql("Select * from DISPATCHREGIONSUM", con=con))
Empty DataFrame
Columns: [SETTLEMENTDATE, REGIONID, TOTALDEMAND, DEMANDFORECAST, INITIALSUPPLY]
Index: []
- Returns
- Return type
None
-
class nempy.historical_inputs.mms_db.InputsBySettlementDate(table_name, table_columns, table_primary_keys, con)¶
Manages retrieving dispatch inputs by SETTLEMENTDATE.
Methods:
add_data(year, month) – Download data for the given table and time; appends to any existing data.
create_table_in_sqlite_db() – Creates a table in the sqlite database that the object has a connection to.
get_data(date_time) – Retrieves data for the specified date_time, e.g. 2019/01/01 11:55:00.
-
get_data(date_time)¶
Retrieves data for the specified date_time, e.g. 2019/01/01 11:55:00.
Examples
>>> import sqlite3
>>> import os
Set up a database or connect to an existing one.
>>> con = sqlite3.connect('historical.db')
Create the table object.
>>> table = InputsBySettlementDate(table_name='EXAMPLE', table_columns=['SETTLEMENTDATE', 'INITIALMW'],
...                                table_primary_keys=['SETTLEMENTDATE'], con=con)
Create the table in the database.
>>> table.create_table_in_sqlite_db()
Normally you would use the add_data method to add historical data, but here we will add data directly to the database so some simple example data can be added.
>>> data = pd.DataFrame({
...     'SETTLEMENTDATE': ['2019/01/01 11:55:00', '2019/01/01 12:00:00'],
...     'INITIALMW': [1.0, 2.0]})
>>> data.to_sql('EXAMPLE', con=con, if_exists='append', index=False)
When we call get_data the output is filtered by SETTLEMENTDATE.
>>> print(table.get_data(date_time='2019/01/01 12:00:00'))
        SETTLEMENTDATE  INITIALMW
0  2019/01/01 12:00:00        2.0
Clean up by closing the database and deleting it if it's no longer needed.
>>> con.close()
>>> os.remove('historical.db')
- Parameters
date_time (str) – Should be of format ‘%Y/%m/%d %H:%M:%S’, and always a round 5 min interval e.g. 2019/01/01 11:55:00.
- Returns
- Return type
pd.DataFrame
-
add_data(year, month)¶
Download data for the given table and time; appends to any existing data.
Note
This method and its documentation is inherited from the _MultiDataSource class.
Examples
>>> import sqlite3
>>> import os
Set up a database or connect to an existing one.
>>> con = sqlite3.connect('historical.db')
Create the table object.
>>> table = _MultiDataSource(table_name='DISPATCHLOAD',
...                          table_columns=['SETTLEMENTDATE', 'DUID', 'RAMPDOWNRATE', 'RAMPUPRATE'],
...                          table_primary_keys=['SETTLEMENTDATE', 'DUID'], con=con)
Create the table in the database.
>>> table.create_table_in_sqlite_db()
Download data from http://nemweb.com.au/#mms-data-model into the table.
>>> table.add_data(year=2020, month=1)
Now the database should contain data for this table that is up to date as of the end of January.
>>> query = "Select * from DISPATCHLOAD order by SETTLEMENTDATE DESC limit 1;"
>>> print(pd.read_sql_query(query, con=con))
        SETTLEMENTDATE   DUID  RAMPDOWNRATE  RAMPUPRATE
0  2020/02/01 00:00:00  YWPS4         180.0       180.0
If we subsequently add data from an earlier month, the old data remains in the table in addition to the new data.
>>> table.add_data(year=2019, month=1)
>>> print(pd.read_sql_query(query, con=con))
        SETTLEMENTDATE   DUID  RAMPDOWNRATE  RAMPUPRATE
0  2020/02/01 00:00:00  YWPS4         180.0       180.0
Clean up by closing the database and deleting it if it's no longer needed.
>>> con.close()
>>> os.remove('historical.db')
- Parameters
year (int) – The year to download data for.
month (int) – The month to download data for.
- Returns
- Return type
None
-
create_table_in_sqlite_db()¶
Creates a table in the sqlite database that the object has a connection to.
Note
This method and its documentation is inherited from the _MMSTable class.
Examples
>>> import sqlite3
>>> import os
Set up a database or connect to an existing one.
>>> con = sqlite3.connect('historical.db')
Create the table object.
>>> table = _MMSTable(table_name='EXAMPLE', table_columns=['DUID', 'BIDTYPE'], table_primary_keys=['DUID'],
...                   con=con)
Create the corresponding table in the sqlite database; note this step may not be needed if you have connected to an existing database.
>>> table.create_table_in_sqlite_db()
Now a table exists in the database, but it's empty.
>>> print(pd.read_sql("Select * from example", con=con))
Empty DataFrame
Columns: [DUID, BIDTYPE]
Index: []
Clean up by closing the database and deleting it if it's no longer needed.
>>> con.close()
>>> os.remove('historical.db')
-
-
class nempy.historical_inputs.mms_db.InputsByIntervalDateTime(table_name, table_columns, table_primary_keys, con)¶
Manages retrieving dispatch inputs by INTERVAL_DATETIME.
Methods:
add_data(year, month) – Download data for the given table and time; appends to any existing data.
create_table_in_sqlite_db() – Creates a table in the sqlite database that the object has a connection to.
get_data(date_time) – Retrieves data for the specified date_time, e.g. 2019/01/01 11:55:00.
-
get_data(date_time)¶
Retrieves data for the specified date_time, e.g. 2019/01/01 11:55:00.
Examples
>>> import sqlite3
>>> import os
Set up a database or connect to an existing one.
>>> con = sqlite3.connect('historical.db')
Create the table object.
>>> table = InputsByIntervalDateTime(table_name='EXAMPLE', table_columns=['INTERVAL_DATETIME', 'INITIALMW'],
...                                  table_primary_keys=['INTERVAL_DATETIME'], con=con)
Create the table in the database.
>>> table.create_table_in_sqlite_db()
Normally you would use the add_data method to add historical data, but here we will add data directly to the database so some simple example data can be added.
>>> data = pd.DataFrame({
...     'INTERVAL_DATETIME': ['2019/01/01 11:55:00', '2019/01/01 12:00:00'],
...     'INITIALMW': [1.0, 2.0]})
>>> data.to_sql('EXAMPLE', con=con, if_exists='append', index=False)
When we call get_data the output is filtered by INTERVAL_DATETIME.
>>> print(table.get_data(date_time='2019/01/01 12:00:00'))
     INTERVAL_DATETIME  INITIALMW
0  2019/01/01 12:00:00        2.0
Clean up by closing the database and deleting it if it's no longer needed.
>>> con.close()
>>> os.remove('historical.db')
- Parameters
date_time (str) – Should be of format ‘%Y/%m/%d %H:%M:%S’, and always a round 5 min interval e.g. 2019/01/01 11:55:00.
- Returns
- Return type
pd.DataFrame
-
add_data(year, month)¶
Download data for the given table and time; appends to any existing data.
Note
This method and its documentation is inherited from the _MultiDataSource class.
Examples
>>> import sqlite3
>>> import os
Set up a database or connect to an existing one.
>>> con = sqlite3.connect('historical.db')
Create the table object.
>>> table = _MultiDataSource(table_name='DISPATCHLOAD',
...                          table_columns=['SETTLEMENTDATE', 'DUID', 'RAMPDOWNRATE', 'RAMPUPRATE'],
...                          table_primary_keys=['SETTLEMENTDATE', 'DUID'], con=con)
Create the table in the database.
>>> table.create_table_in_sqlite_db()
Download data from http://nemweb.com.au/#mms-data-model into the table.
>>> table.add_data(year=2020, month=1)
Now the database should contain data for this table that is up to date as of the end of January.
>>> query = "Select * from DISPATCHLOAD order by SETTLEMENTDATE DESC limit 1;"
>>> print(pd.read_sql_query(query, con=con))
        SETTLEMENTDATE   DUID  RAMPDOWNRATE  RAMPUPRATE
0  2020/02/01 00:00:00  YWPS4         180.0       180.0
If we subsequently add data from an earlier month, the old data remains in the table in addition to the new data.
>>> table.add_data(year=2019, month=1)
>>> print(pd.read_sql_query(query, con=con))
        SETTLEMENTDATE   DUID  RAMPDOWNRATE  RAMPUPRATE
0  2020/02/01 00:00:00  YWPS4         180.0       180.0
Clean up by closing the database and deleting it if it's no longer needed.
>>> con.close()
>>> os.remove('historical.db')
- Parameters
year (int) – The year to download data for.
month (int) – The month to download data for.
- Returns
- Return type
None
-
create_table_in_sqlite_db()¶
Creates a table in the sqlite database that the object has a connection to.
Note
This method and its documentation is inherited from the _MMSTable class.
Examples
>>> import sqlite3
>>> import os
Set up a database or connect to an existing one.
>>> con = sqlite3.connect('historical.db')
Create the table object.
>>> table = _MMSTable(table_name='EXAMPLE', table_columns=['DUID', 'BIDTYPE'], table_primary_keys=['DUID'],
...                   con=con)
Create the corresponding table in the sqlite database; note this step may not be needed if you have connected to an existing database.
>>> table.create_table_in_sqlite_db()
Now a table exists in the database, but it's empty.
>>> print(pd.read_sql("Select * from example", con=con))
Empty DataFrame
Columns: [DUID, BIDTYPE]
Index: []
Clean up by closing the database and deleting it if it's no longer needed.
>>> con.close()
>>> os.remove('historical.db')
-
-
class nempy.historical_inputs.mms_db.InputsByDay(table_name, table_columns, table_primary_keys, con)¶
Manages retrieving dispatch inputs by SETTLEMENTDATE, where inputs are stored on a daily basis.
Methods:
add_data(year, month) – Download data for the given table and time; appends to any existing data.
create_table_in_sqlite_db() – Creates a table in the sqlite database that the object has a connection to.
get_data(date_time) – Retrieves data for the specified date_time, e.g. 2019/01/01 11:55:00.
-
get_data(date_time)¶
Retrieves data for the specified date_time, e.g. 2019/01/01 11:55:00, where inputs are stored on a daily basis.
Note that a market day begins with the first 5 min interval at 04:05:00. Therefore, if an input date_time of 2019/01/01 04:05:00 is given, inputs where the SETTLEMENTDATE is 2019/01/01 00:00:00 will be retrieved, and if a date_time of 2019/01/01 04:00:00 or earlier is given, then inputs where the SETTLEMENTDATE is 2018/12/31 00:00:00 will be retrieved. A sketch of this convention follows.
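A minimal sketch of the market day convention described above (an illustrative helper, not part of nempy):
>>> from datetime import datetime, timedelta
>>> def market_day(date_time):
...     """Return the SETTLEMENTDATE of the market day a date_time falls in."""
...     t = datetime.strptime(date_time, '%Y/%m/%d %H:%M:%S')
...     if t.hour * 60 + t.minute <= 4 * 60:  # 04:00:00 or earlier belongs to the previous day
...         t = t - timedelta(days=1)
...     return t.strftime('%Y/%m/%d 00:00:00')
>>> market_day('2019/01/02 04:00:00')
'2019/01/01 00:00:00'
>>> market_day('2019/01/02 04:05:00')
'2019/01/02 00:00:00'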
Examples
>>> import sqlite3
>>> import os
Set up a database or connect to an existing one.
>>> con = sqlite3.connect('historical.db')
Create the table object.
>>> table = InputsByDay(table_name='EXAMPLE', table_columns=['SETTLEMENTDATE', 'INITIALMW'],
...                     table_primary_keys=['SETTLEMENTDATE'], con=con)
Create the table in the database.
>>> table.create_table_in_sqlite_db()
Normally you would use the add_data method to add historical data, but here we will add data directly to the database so some simple example data can be added.
>>> data = pd.DataFrame({
...     'SETTLEMENTDATE': ['2019/01/01 00:00:00', '2019/01/02 00:00:00'],
...     'INITIALMW': [1.0, 2.0]})
>>> data.to_sql('EXAMPLE', con=con, if_exists='append', index=False)
When we call get_data the output is filtered by SETTLEMENTDATE and the results from the appropriate market day starting at 04:05:00 are retrieved. In the results below note when the output changes.
>>> print(table.get_data(date_time='2019/01/01 12:00:00'))
        SETTLEMENTDATE  INITIALMW
0  2019/01/01 00:00:00        1.0
>>> print(table.get_data(date_time='2019/01/02 04:00:00'))
        SETTLEMENTDATE  INITIALMW
0  2019/01/01 00:00:00        1.0
>>> print(table.get_data(date_time='2019/01/02 04:05:00'))
        SETTLEMENTDATE  INITIALMW
0  2019/01/02 00:00:00        2.0
>>> print(table.get_data(date_time='2019/01/02 12:00:00'))
        SETTLEMENTDATE  INITIALMW
0  2019/01/02 00:00:00        2.0
Clean up by closing the database and deleting it if it's no longer needed.
>>> con.close()
>>> os.remove('historical.db')
- Parameters
date_time (str) – Should be of format ‘%Y/%m/%d %H:%M:%S’, and always a round 5 min interval e.g. 2019/01/01 11:55:00.
- Returns
- Return type
pd.DataFrame
-
add_data(year, month)¶
Download data for the given table and time; appends to any existing data.
Note
This method and its documentation is inherited from the _MultiDataSource class.
Examples
>>> import sqlite3
>>> import os
Set up a database or connect to an existing one.
>>> con = sqlite3.connect('historical.db')
Create the table object.
>>> table = _MultiDataSource(table_name='DISPATCHLOAD',
...                          table_columns=['SETTLEMENTDATE', 'DUID', 'RAMPDOWNRATE', 'RAMPUPRATE'],
...                          table_primary_keys=['SETTLEMENTDATE', 'DUID'], con=con)
Create the table in the database.
>>> table.create_table_in_sqlite_db()
Download data from http://nemweb.com.au/#mms-data-model into the table.
>>> table.add_data(year=2020, month=1)
Now the database should contain data for this table that is up to date as of the end of January.
>>> query = "Select * from DISPATCHLOAD order by SETTLEMENTDATE DESC limit 1;"
>>> print(pd.read_sql_query(query, con=con))
        SETTLEMENTDATE   DUID  RAMPDOWNRATE  RAMPUPRATE
0  2020/02/01 00:00:00  YWPS4         180.0       180.0
If we subsequently add data from an earlier month, the old data remains in the table in addition to the new data.
>>> table.add_data(year=2019, month=1)
>>> print(pd.read_sql_query(query, con=con))
        SETTLEMENTDATE   DUID  RAMPDOWNRATE  RAMPUPRATE
0  2020/02/01 00:00:00  YWPS4         180.0       180.0
Clean up by closing the database and deleting it if it's no longer needed.
>>> con.close()
>>> os.remove('historical.db')
- Parameters
year (int) – The year to download data for.
month (int) – The month to download data for.
- Returns
- Return type
None
-
create_table_in_sqlite_db()¶
Creates a table in the sqlite database that the object has a connection to.
Note
This method and its documentation is inherited from the _MMSTable class.
Examples
>>> import sqlite3
>>> import os
Set up a database or connect to an existing one.
>>> con = sqlite3.connect('historical.db')
Create the table object.
>>> table = _MMSTable(table_name='EXAMPLE', table_columns=['DUID', 'BIDTYPE'], table_primary_keys=['DUID'],
...                   con=con)
Create the corresponding table in the sqlite database; note this step may not be needed if you have connected to an existing database.
>>> table.create_table_in_sqlite_db()
Now a table exists in the database, but it's empty.
>>> print(pd.read_sql("Select * from example", con=con))
Empty DataFrame
Columns: [DUID, BIDTYPE]
Index: []
Clean up by closing the database and deleting it if it's no longer needed.
>>> con.close()
>>> os.remove('historical.db')
-
-
class nempy.historical_inputs.mms_db.InputsStartAndEnd(table_name, table_columns, table_primary_keys, con)¶
Manages retrieving dispatch inputs by START_DATE and END_DATE.
Methods:
create_table_in_sqlite_db() – Creates a table in the sqlite database that the object has a connection to.
get_data(date_time) – Retrieves data for the specified date_time by START_DATE and END_DATE.
set_data(year, month) – Download data for the given table and time; replaces any existing data.
-
get_data(date_time)¶
Retrieves data for the specified date_time by START_DATE and END_DATE.
Records with a START_DATE before or equal to the date_time and an END_DATE after the date_time will be returned.
Examples
>>> import sqlite3
>>> import os
Set up a database or connect to an existing one.
>>> con = sqlite3.connect('historical.db')
Create the table object.
>>> table = InputsStartAndEnd(table_name='EXAMPLE', table_columns=['START_DATE', 'END_DATE', 'INITIALMW'],
...                           table_primary_keys=['START_DATE'], con=con)
Create the table in the database.
>>> table.create_table_in_sqlite_db()
Normally you would use the set_data method to add historical data, but here we will add data directly to the database so some simple example data can be added.
>>> data = pd.DataFrame({
...     'START_DATE': ['2019/01/01 00:00:00', '2019/01/02 00:00:00'],
...     'END_DATE': ['2019/01/02 00:00:00', '2019/01/03 00:00:00'],
...     'INITIALMW': [1.0, 2.0]})
>>> data.to_sql('EXAMPLE', con=con, if_exists='append', index=False)
When we call get_data the output is filtered by START_DATE and END_DATE.
>>> print(table.get_data(date_time='2019/01/01 00:00:00'))
            START_DATE             END_DATE  INITIALMW
0  2019/01/01 00:00:00  2019/01/02 00:00:00        1.0
>>> print(table.get_data(date_time='2019/01/01 12:00:00'))
            START_DATE             END_DATE  INITIALMW
0  2019/01/01 00:00:00  2019/01/02 00:00:00        1.0
>>> print(table.get_data(date_time='2019/01/02 00:00:00'))
            START_DATE             END_DATE  INITIALMW
0  2019/01/02 00:00:00  2019/01/03 00:00:00        2.0
>>> print(table.get_data(date_time='2019/01/02 00:12:00'))
            START_DATE             END_DATE  INITIALMW
0  2019/01/02 00:00:00  2019/01/03 00:00:00        2.0
Clean up by closing the database and deleting it if it's no longer needed.
>>> con.close()
>>> os.remove('historical.db')
- Parameters
date_time (str) – Should be of format ‘%Y/%m/%d %H:%M:%S’, and always a round 5 min interval e.g. 2019/01/01 11:55:00.
- Returns
- Return type
pd.DataFrame
-
create_table_in_sqlite_db()¶
Creates a table in the sqlite database that the object has a connection to.
Note
This method and its documentation is inherited from the _MMSTable class.
Examples
>>> import sqlite3
>>> import os
Set up a database or connect to an existing one.
>>> con = sqlite3.connect('historical.db')
Create the table object.
>>> table = _MMSTable(table_name='EXAMPLE', table_columns=['DUID', 'BIDTYPE'], table_primary_keys=['DUID'],
...                   con=con)
Create the corresponding table in the sqlite database; note this step may not be needed if you have connected to an existing database.
>>> table.create_table_in_sqlite_db()
Now a table exists in the database, but it's empty.
>>> print(pd.read_sql("Select * from example", con=con))
Empty DataFrame
Columns: [DUID, BIDTYPE]
Index: []
Clean up by closing the database and deleting it if it's no longer needed.
>>> con.close()
>>> os.remove('historical.db')
-
set_data(year, month)¶
Download data for the given table and time; replaces any existing data.
Note
This method and its documentation is inherited from the _SingleDataSource class.
Examples
>>> import sqlite3
>>> import os
Set up a database or connect to an existing one.
>>> con = sqlite3.connect('historical.db')
Create the table object.
>>> table = _SingleDataSource(table_name='DUDETAILSUMMARY',
...                           table_columns=['DUID', 'START_DATE', 'CONNECTIONPOINTID', 'REGIONID'],
...                           table_primary_keys=['START_DATE', 'DUID'], con=con)
Create the table in the database.
>>> table.create_table_in_sqlite_db()
Download data from http://nemweb.com.au/#mms-data-model into the table.
>>> table.set_data(year=2020, month=1)
Now the database should contain data for this table that is up to date as of the end of January.
>>> query = "Select * from DUDETAILSUMMARY order by START_DATE DESC limit 1;"
>>> print(pd.read_sql_query(query, con=con))
      DUID           START_DATE CONNECTIONPOINTID REGIONID
0  URANQ11  2020/02/04 00:00:00            NURQ1U     NSW1
However, if we subsequently set data from a previous date then any existing data will be replaced. Note the change in the most recent record in the data set below.
>>> table.set_data(year=2019, month=1)
>>> print(pd.read_sql_query(query, con=con))
       DUID           START_DATE CONNECTIONPOINTID REGIONID
0  WEMENSF1  2019/03/04 00:00:00            VWES2W     VIC1
Clean up by closing the database and deleting it if it's no longer needed.
>>> con.close()
>>> os.remove('historical.db')
- Parameters
year (int) – The year to download data for.
month (int) – The month to download data for.
- Returns
- Return type
None
-
-
class nempy.historical_inputs.mms_db.InputsByMatchDispatchConstraints(table_name, table_columns, table_primary_keys, con)¶
Manages retrieving dispatch inputs by matching against the DISPATCHCONSTRAINTS table.
Methods:
create_table_in_sqlite_db() – Creates a table in the sqlite database that the object has a connection to.
get_data(date_time) – Retrieves data for the specified date_time by matching against the DISPATCHCONSTRAINT table.
set_data(year, month) – Download data for the given table and time; replaces any existing data.
-
get_data(date_time)¶
Retrieves data for the specified date_time by matching against the DISPATCHCONSTRAINT table.
First the DISPATCHCONSTRAINT table is filtered by SETTLEMENTDATE, and then the contents of the class's table is matched against that.
Examples
>>> import sqlite3
>>> import os
Set up a database or connect to an existing one.
>>> con = sqlite3.connect('historical.db')
Create the table object.
>>> table = InputsByMatchDispatchConstraints(table_name='EXAMPLE',
...                                          table_columns=['GENCONID', 'EFFECTIVEDATE', 'VERSIONNO', 'RHS'],
...                                          table_primary_keys=['GENCONID', 'EFFECTIVEDATE', 'VERSIONNO'], con=con)
Create the table in the database.
>>> table.create_table_in_sqlite_db()
Normally you would use the set_data method to add historical data, but here we will add data directly to the database so some simple example data can be added.
>>> data = pd.DataFrame({
...     'GENCONID': ['X', 'X', 'Y', 'Y'],
...     'EFFECTIVEDATE': ['2019/01/02 00:00:00', '2019/01/03 00:00:00', '2019/01/01 00:00:00',
...                       '2019/01/03 00:00:00'],
...     'VERSIONNO': [1, 2, 2, 3],
...     'RHS': [1.0, 2.0, 2.0, 3.0]})
>>> data.to_sql('EXAMPLE', con=con, if_exists='append', index=False)
>>> data = pd.DataFrame({
...     'SETTLEMENTDATE': ['2019/01/02 00:00:00', '2019/01/02 00:00:00', '2019/01/03 00:00:00',
...                        '2019/01/03 00:00:00'],
...     'CONSTRAINTID': ['X', 'Y', 'X', 'Y'],
...     'GENCONID_EFFECTIVEDATE': ['2019/01/02 00:00:00', '2019/01/01 00:00:00', '2019/01/03 00:00:00',
...                                '2019/01/03 00:00:00'],
...     'GENCONID_VERSIONNO': [1, 2, 2, 3]})
>>> data.to_sql('DISPATCHCONSTRAINT', con=con, if_exists='append', index=False)
When we call get_data the output is filtered by the contents of DISPATCHCONSTRAINT.
>>> print(table.get_data(date_time='2019/01/02 00:00:00'))
  GENCONID        EFFECTIVEDATE  VERSIONNO  RHS
0        X  2019/01/02 00:00:00          1  1.0
1        Y  2019/01/01 00:00:00          2  2.0
>>> print(table.get_data(date_time='2019/01/03 00:00:00'))
  GENCONID        EFFECTIVEDATE  VERSIONNO  RHS
0        X  2019/01/03 00:00:00          2  2.0
1        Y  2019/01/03 00:00:00          3  3.0
Clean up by closing the database and deleting it if it's no longer needed.
>>> con.close()
>>> os.remove('historical.db')
- Parameters
date_time (str) – Should be of format ‘%Y/%m/%d %H:%M:%S’, and always a round 5 min interval e.g. 2019/01/01 11:55:00.
- Returns
- Return type
pd.DataFrame
-
create_table_in_sqlite_db()¶
Creates a table in the sqlite database that the object has a connection to.
Note
This method and its documentation is inherited from the _MMSTable class.
Examples
>>> import sqlite3
>>> import os
Set up a database or connect to an existing one.
>>> con = sqlite3.connect('historical.db')
Create the table object.
>>> table = _MMSTable(table_name='EXAMPLE', table_columns=['DUID', 'BIDTYPE'], table_primary_keys=['DUID'],
...                   con=con)
Create the corresponding table in the sqlite database; note this step may not be needed if you have connected to an existing database.
>>> table.create_table_in_sqlite_db()
Now a table exists in the database, but it's empty.
>>> print(pd.read_sql("Select * from example", con=con))
Empty DataFrame
Columns: [DUID, BIDTYPE]
Index: []
Clean up by closing the database and deleting it if it's no longer needed.
>>> con.close()
>>> os.remove('historical.db')
-
set_data(year, month)¶
Download data for the given table and time; replaces any existing data.
Note
This method and its documentation is inherited from the _SingleDataSource class.
Examples
>>> import sqlite3
>>> import os
Set up a database or connect to an existing one.
>>> con = sqlite3.connect('historical.db')
Create the table object.
>>> table = _SingleDataSource(table_name='DUDETAILSUMMARY',
...                           table_columns=['DUID', 'START_DATE', 'CONNECTIONPOINTID', 'REGIONID'],
...                           table_primary_keys=['START_DATE', 'DUID'], con=con)
Create the table in the database.
>>> table.create_table_in_sqlite_db()
Download data from http://nemweb.com.au/#mms-data-model into the table.
>>> table.set_data(year=2020, month=1)
Now the database should contain data for this table that is up to date as of the end of January.
>>> query = "Select * from DUDETAILSUMMARY order by START_DATE DESC limit 1;"
>>> print(pd.read_sql_query(query, con=con))
      DUID           START_DATE CONNECTIONPOINTID REGIONID
0  URANQ11  2020/02/04 00:00:00            NURQ1U     NSW1
However, if we subsequently set data from a previous date then any existing data will be replaced. Note the change in the most recent record in the data set below.
>>> table.set_data(year=2019, month=1)
>>> print(pd.read_sql_query(query, con=con))
       DUID           START_DATE CONNECTIONPOINTID REGIONID
0  WEMENSF1  2019/03/04 00:00:00            VWES2W     VIC1
Clean up by closing the database and deleting it if it's no longer needed.
>>> con.close()
>>> os.remove('historical.db')
- Parameters
year (int) – The year to download data for.
month (int) – The month to download data for.
- Returns
- Return type
None
-
-
class nempy.historical_inputs.mms_db.InputsByEffectiveDateVersionNoAndDispatchInterconnector(table_name, table_columns, table_primary_keys, con)¶
Manages retrieving dispatch inputs by EFFECTIVEDATE and VERSIONNO.
Methods:
create_table_in_sqlite_db() – Creates a table in the sqlite database that the object has a connection to.
get_data(date_time) – Retrieves data for the specified date_time by EFFECTIVEDATE and VERSIONNO.
set_data(year, month) – Download data for the given table and time; replaces any existing data.
-
get_data(date_time)¶
Retrieves data for the specified date_time by EFFECTIVEDATE and VERSIONNO.
For each unique record (by the remaining primary keys, not including EFFECTIVEDATE and VERSIONNO) the record with the most recent EFFECTIVEDATE, and the highest VERSIONNO on ties, is returned.
Examples
>>> import sqlite3
>>> import os
Set up a database or connect to an existing one.
>>> con = sqlite3.connect('historical_inputs.db')
Create the table object.
>>> table = InputsByEffectiveDateVersionNoAndDispatchInterconnector(table_name='EXAMPLE',
...     table_columns=['INTERCONNECTORID', 'EFFECTIVEDATE', 'VERSIONNO', 'INITIALMW'],
...     table_primary_keys=['INTERCONNECTORID', 'EFFECTIVEDATE', 'VERSIONNO'], con=con)
Create the table in the database.
>>> table.create_table_in_sqlite_db()
Normally you would use the set_data method to add historical inputs data, but here we will add data directly to the database so some simple example data can be added.
>>> data = pd.DataFrame({
...     'INTERCONNECTORID': ['X', 'X', 'Y', 'Y'],
...     'EFFECTIVEDATE': ['2019/01/02 00:00:00', '2019/01/03 00:00:00', '2019/01/01 00:00:00',
...                       '2019/01/03 00:00:00'],
...     'VERSIONNO': [1, 2, 2, 3],
...     'INITIALMW': [1.0, 2.0, 2.0, 3.0]})
>>> data.to_sql('EXAMPLE', con=con, if_exists='append', index=False)
We also need to add data to DISPATCHINTERCONNECTORRES because the results of the get_data method are filtered against this table.
>>> data = pd.DataFrame({
...     'INTERCONNECTORID': ['X', 'X', 'Y'],
...     'SETTLEMENTDATE': ['2019/01/02 00:00:00', '2019/01/03 00:00:00', '2019/01/02 00:00:00']})
>>> data.to_sql('DISPATCHINTERCONNECTORRES', con=con, if_exists='append', index=False)
When we call get_data the output is filtered by the contents of DISPATCHINTERCONNECTORRES.
>>> print(table.get_data(date_time='2019/01/02 00:00:00'))
  INTERCONNECTORID        EFFECTIVEDATE  VERSIONNO  INITIALMW
0                X  2019/01/02 00:00:00          1        1.0
1                Y  2019/01/01 00:00:00          2        2.0
In the next interval interconnector Y is not present in DISPATCHINTERCONNECTORRES.
>>> print(table.get_data(date_time='2019/01/03 00:00:00'))
  INTERCONNECTORID        EFFECTIVEDATE  VERSIONNO  INITIALMW
0                X  2019/01/03 00:00:00          2        2.0
Clean up by closing the database and deleting it if it's no longer needed.
>>> con.close()
>>> os.remove('historical_inputs.db')
- Parameters
date_time (str) – Should be of format ‘%Y/%m/%d %H:%M:%S’, and always a round 5 min interval e.g. 2019/01/01 11:55:00.
- Returns
- Return type
pd.DataFrame
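For intuition, the date-effective selection described above can be reproduced with plain pandas; a sketch of the assumed logic (not the library's actual implementation):
>>> df = pd.DataFrame({
...     'INTERCONNECTORID': ['X', 'X'],
...     'EFFECTIVEDATE': ['2019/01/02 00:00:00', '2019/01/03 00:00:00'],
...     'VERSIONNO': [1, 2],
...     'INITIALMW': [1.0, 2.0]})
>>> # Keep, per INTERCONNECTORID, the latest EFFECTIVEDATE at or before the given
>>> # date_time, breaking ties with the highest VERSIONNO.
>>> (df[df['EFFECTIVEDATE'] <= '2019/01/02 00:00:00']
...  .sort_values(['EFFECTIVEDATE', 'VERSIONNO'])
...  .groupby('INTERCONNECTORID')
...  .tail(1))
  INTERCONNECTORID        EFFECTIVEDATE  VERSIONNO  INITIALMW
0                X  2019/01/02 00:00:00          1        1.0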
-
create_table_in_sqlite_db()¶
Creates a table in the sqlite database that the object has a connection to.
Note
This method and its documentation is inherited from the _MMSTable class.
Examples
>>> import sqlite3
>>> import os
Set up a database or connect to an existing one.
>>> con = sqlite3.connect('historical.db')
Create the table object.
>>> table = _MMSTable(table_name='EXAMPLE', table_columns=['DUID', 'BIDTYPE'], table_primary_keys=['DUID'],
...                   con=con)
Create the corresponding table in the sqlite database; note this step may not be needed if you have connected to an existing database.
>>> table.create_table_in_sqlite_db()
Now a table exists in the database, but it's empty.
>>> print(pd.read_sql("Select * from example", con=con))
Empty DataFrame
Columns: [DUID, BIDTYPE]
Index: []
Clean up by closing the database and deleting it if it's no longer needed.
>>> con.close()
>>> os.remove('historical.db')
-
set_data(year, month)¶
Download data for the given table and time; replaces any existing data.
Note
This method and its documentation is inherited from the _SingleDataSource class.
Examples
>>> import sqlite3
>>> import os
Set up a database or connect to an existing one.
>>> con = sqlite3.connect('historical.db')
Create the table object.
>>> table = _SingleDataSource(table_name='DUDETAILSUMMARY',
...                           table_columns=['DUID', 'START_DATE', 'CONNECTIONPOINTID', 'REGIONID'],
...                           table_primary_keys=['START_DATE', 'DUID'], con=con)
Create the table in the database.
>>> table.create_table_in_sqlite_db()
Download data from http://nemweb.com.au/#mms-data-model into the table.
>>> table.set_data(year=2020, month=1)
Now the database should contain data for this table that is up to date as of the end of January.
>>> query = "Select * from DUDETAILSUMMARY order by START_DATE DESC limit 1;"
>>> print(pd.read_sql_query(query, con=con))
      DUID           START_DATE CONNECTIONPOINTID REGIONID
0  URANQ11  2020/02/04 00:00:00            NURQ1U     NSW1
However, if we subsequently set data from a previous date then any existing data will be replaced. Note the change in the most recent record in the data set below.
>>> table.set_data(year=2019, month=1)
>>> print(pd.read_sql_query(query, con=con))
       DUID           START_DATE CONNECTIONPOINTID REGIONID
0  WEMENSF1  2019/03/04 00:00:00            VWES2W     VIC1
Clean up by closing the database and deleting it if it's no longer needed.
>>> con.close()
>>> os.remove('historical.db')
- Parameters
year (int) – The year to download data for.
month (int) – The month to download data for.
- Returns
- Return type
None
-
-
class nempy.historical_inputs.mms_db.InputsByEffectiveDateVersionNo(table_name, table_columns, table_primary_keys, con)¶
Manages retrieving dispatch inputs by EFFECTIVEDATE and VERSIONNO.
Methods:
create_table_in_sqlite_db() – Creates a table in the sqlite database that the object has a connection to.
get_data(date_time) – Retrieves data for the specified date_time by EFFECTIVEDATE and VERSIONNO.
set_data(year, month) – Download data for the given table and time; replaces any existing data.
-
get_data(date_time)¶
Retrieves data for the specified date_time by EFFECTIVEDATE and VERSIONNO.
For each unique record (by the remaining primary keys, not including EFFECTIVEDATE and VERSIONNO) the record with the most recent EFFECTIVEDATE, and the highest VERSIONNO on ties, is returned.
Examples
>>> import sqlite3
>>> import os
Set up a database or connect to an existing one.
>>> con = sqlite3.connect('historical.db')
Create the table object.
>>> table = InputsByEffectiveDateVersionNo(table_name='EXAMPLE',
...                                        table_columns=['DUID', 'EFFECTIVEDATE', 'VERSIONNO', 'INITIALMW'],
...                                        table_primary_keys=['DUID', 'EFFECTIVEDATE', 'VERSIONNO'], con=con)
Create the table in the database.
>>> table.create_table_in_sqlite_db()
Normally you would use the set_data method to add historical data, but here we will add data directly to the database so some simple example data can be added.
>>> data = pd.DataFrame({
...     'DUID': ['X', 'X', 'Y', 'Y'],
...     'EFFECTIVEDATE': ['2019/01/02 00:00:00', '2019/01/03 00:00:00', '2019/01/01 00:00:00',
...                       '2019/01/03 00:00:00'],
...     'VERSIONNO': [1, 2, 2, 3],
...     'INITIALMW': [1.0, 2.0, 2.0, 3.0]})
>>> data.to_sql('EXAMPLE', con=con, if_exists='append', index=False)
When we call get_data the output is filtered by the most recent effective date and highest version number.
>>> print(table.get_data(date_time='2019/01/02 00:00:00'))
  DUID        EFFECTIVEDATE  VERSIONNO  INITIALMW
0    X  2019/01/02 00:00:00          1        1.0
1    Y  2019/01/01 00:00:00          2        2.0
At the later date_time the records with the more recent EFFECTIVEDATE are selected.
>>> print(table.get_data(date_time='2019/01/03 00:00:00'))
  DUID        EFFECTIVEDATE  VERSIONNO  INITIALMW
0    X  2019/01/03 00:00:00          2        2.0
1    Y  2019/01/03 00:00:00          3        3.0
Clean up by closing the database and deleting it if it's no longer needed.
>>> con.close()
>>> os.remove('historical.db')
- Parameters
date_time (str) – Should be of format ‘%Y/%m/%d %H:%M:%S’, and always a round 5 min interval e.g. 2019/01/01 11:55:00.
- Returns
- Return type
pd.DataFrame
-
create_table_in_sqlite_db()¶
Creates a table in the sqlite database that the object has a connection to.
Note
This method and its documentation is inherited from the _MMSTable class.
Examples
>>> import sqlite3
>>> import os
Set up a database or connect to an existing one.
>>> con = sqlite3.connect('historical.db')
Create the table object.
>>> table = _MMSTable(table_name='EXAMPLE', table_columns=['DUID', 'BIDTYPE'], table_primary_keys=['DUID'],
...                   con=con)
Create the corresponding table in the sqlite database; note this step may not be needed if you have connected to an existing database.
>>> table.create_table_in_sqlite_db()
Now a table exists in the database, but it's empty.
>>> print(pd.read_sql("Select * from example", con=con))
Empty DataFrame
Columns: [DUID, BIDTYPE]
Index: []
Clean up by closing the database and deleting it if it's no longer needed.
>>> con.close()
>>> os.remove('historical.db')
-
set_data(year, month)¶
Download data for the given table and time; replaces any existing data.
Note
This method and its documentation is inherited from the _SingleDataSource class.
Examples
>>> import sqlite3
>>> import os
Set up a database or connect to an existing one.
>>> con = sqlite3.connect('historical.db')
Create the table object.
>>> table = _SingleDataSource(table_name='DUDETAILSUMMARY',
...                           table_columns=['DUID', 'START_DATE', 'CONNECTIONPOINTID', 'REGIONID'],
...                           table_primary_keys=['START_DATE', 'DUID'], con=con)
Create the table in the database.
>>> table.create_table_in_sqlite_db()
Download data from http://nemweb.com.au/#mms-data-model into the table.
>>> table.set_data(year=2020, month=1)
Now the database should contain data for this table that is up to date as of the end of January.
>>> query = "Select * from DUDETAILSUMMARY order by START_DATE DESC limit 1;"
>>> print(pd.read_sql_query(query, con=con))
      DUID           START_DATE CONNECTIONPOINTID REGIONID
0  URANQ11  2020/02/04 00:00:00            NURQ1U     NSW1
However, if we subsequently set data from a previous date then any existing data will be replaced. Note the change in the most recent record in the data set below.
>>> table.set_data(year=2019, month=1)
>>> print(pd.read_sql_query(query, con=con))
       DUID           START_DATE CONNECTIONPOINTID REGIONID
0  WEMENSF1  2019/03/04 00:00:00            VWES2W     VIC1
Clean up by closing the database and deleting it if it's no longer needed.
>>> con.close()
>>> os.remove('historical.db')
- Parameters
year (int) – The year to download data for.
month (int) – The month to download data for.
- Returns
- Return type
None
-
-
class nempy.historical_inputs.mms_db.InputsNoFilter(table_name, table_columns, table_primary_keys, con)¶
Manages retrieving dispatch inputs where no filter is required.
Methods:
create_table_in_sqlite_db() – Creates a table in the sqlite database that the object has a connection to.
get_data() – Retrieves all data in the table.
set_data(year, month) – Download data for the given table and time; replaces any existing data.
-
get_data
()¶ Retrieves all data in the table.
Examples
>>> import sqlite3 >>> import os
Set up a database or connect to an existing one.
>>> con = sqlite3.connect('historical_inputs.db')
Create the table object.
>>> table = InputsNoFilter(table_name='EXAMPLE', table_columns=['DUID', 'INITIALMW'], ... table_primary_keys=['DUID'], con=con)
Create the table in the database.
>>> table.create_table_in_sqlite_db()
Normally you would use the set_data method to add historical data, but here we add data directly to the database so that some simple example data can be used.
>>> data = pd.DataFrame({ ... 'DUID': ['X', 'Y'], ... 'INITIALMW': [1.0, 2.0]})
>>> data.to_sql('EXAMPLE', con=con, if_exists='append', index=False)
When we call get_data all data in the table is returned.
>>> print(table.get_data()) DUID INITIALMW 0 X 1.0 1 Y 2.0
Clean up by closing the database and deleting it if it's no longer needed.
>>> con.close() >>> os.remove('historical_inputs.db')
- Returns
- Return type
pd.DataFrame
-
create_table_in_sqlite_db
()¶ Creates a table in the sqlite database that the object has a connection to.
Note
This method and its documentation are inherited from the _MMSTable class.
Examples
>>> import sqlite3 >>> import os
Set up a database or connect to an existing one.
>>> con = sqlite3.connect('historical.db')
Create the table object.
>>> table = _MMSTable(table_name='EXAMPLE', table_columns=['DUID', 'BIDTYPE'], table_primary_keys=['DUID'], ... con=con)
Create the corresponding table in the sqlite database; note this step may not be needed if you have connected to an existing database.
>>> table.create_table_in_sqlite_db()
Now a table exists in the database, but it's empty.
>>> print(pd.read_sql("Select * from example", con=con)) Empty DataFrame Columns: [DUID, BIDTYPE] Index: []
Clean up by closing the database and deleting it if it's no longer needed.
>>> con.close() >>> os.remove('historical.db')
-
set_data
(year, month)¶ Download data for the given table and time, replacing any existing data.
Note
This method and its documentation are inherited from the _SingleDataSource class.
Examples
>>> import sqlite3 >>> import os
Set up a database or connect to an existing one.
>>> con = sqlite3.connect('historical.db')
Create the table object.
>>> table = _SingleDataSource(table_name='DUDETAILSUMMARY', ... table_columns=['DUID', 'START_DATE', 'CONNECTIONPOINTID', 'REGIONID'], ... table_primary_keys=['START_DATE', 'DUID'], con=con)
Create the table in the database.
>>> table.create_table_in_sqlite_db()
Download data from http://nemweb.com.au/#mms-data-model into the table.
>>> table.set_data(year=2020, month=1)
Now the database should contain data for this table that is up to date as of the end of January.
>>> query = "Select * from DUDETAILSUMMARY order by START_DATE DESC limit 1;"
>>> print(pd.read_sql_query(query, con=con)) DUID START_DATE CONNECTIONPOINTID REGIONID 0 URANQ11 2020/02/04 00:00:00 NURQ1U NSW1
However, if we subsequently set data for an earlier date, any existing data will be replaced. Note the change in the most recent record in the data set below.
>>> table.set_data(year=2019, month=1)
>>> print(pd.read_sql_query(query, con=con)) DUID START_DATE CONNECTIONPOINTID REGIONID 0 WEMENSF1 2019/03/04 00:00:00 VWES2W VIC1
Clean up by closing the database and deleting it if it's no longer needed.
>>> con.close() >>> os.remove('historical.db')
- Parameters
year (int) – The year to download data for.
month (int) – The month to download data for.
- Returns
- Return type
None
-
loaders¶
Classes:
|
Provides a single interface for accessing raw historical inputs. |
-
class
nempy.historical_inputs.loaders.
RawInputsLoader
(nemde_xml_cache_manager, market_management_system_database)¶ Provides a single interface for accessing raw historical inputs.
Examples
>>> import sqlite3
>>> from nempy.historical_inputs import mms_db >>> from nempy.historical_inputs import xml_cache
For the RawInputsLoader to work we need to construct a database and an inputs cache for it to load inputs from, and then pass interfaces to these to the inputs loader.
>>> con = sqlite3.connect('market_management_system.db') >>> mms_db_manager = mms_db.DBManager(connection=con) >>> xml_cache_manager = xml_cache.XMLCacheManager('test_nemde_cache')
In this example the database and cache have already been populated, so the inputs loader can be created straight away.
>>> inputs_loader = RawInputsLoader(xml_cache_manager, mms_db_manager)
Then we set the dispatch interval that we want to load inputs from.
>>> inputs_loader.set_interval('2019/01/01 00:00:00')
And then we can load some inputs.
>>> inputs_loader.get_unit_volume_bids() DUID BIDTYPE MAXAVAIL ENABLEMENTMIN ENABLEMENTMAX LOWBREAKPOINT HIGHBREAKPOINT BANDAVAIL1 BANDAVAIL2 BANDAVAIL3 BANDAVAIL4 BANDAVAIL5 BANDAVAIL6 BANDAVAIL7 BANDAVAIL8 BANDAVAIL9 BANDAVAIL10 RAMPDOWNRATE RAMPUPRATE 0 AGLHAL ENERGY 173.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 60.0 0.0 0.0 160.0 720.0 720.0 1 AGLSOM ENERGY 160.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 170.0 480.0 480.0 2 ANGAST1 ENERGY 43.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 50.0 0.0 0.0 0.0 50.0 840.0 840.0 3 APD01 LOWER5MIN 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 300.0 0.0 0.0 4 APD01 LOWER60SEC 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 300.0 0.0 0.0 ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... 1021 YWPS4 LOWER6SEC 25.0 250.0 385.0 275.0 385.0 15.0 10.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1022 YWPS4 RAISE5MIN 0.0 250.0 390.0 250.0 380.0 0.0 0.0 0.0 0.0 5.0 0.0 0.0 5.0 0.0 10.0 0.0 0.0 1023 YWPS4 RAISEREG 15.0 250.0 385.0 250.0 370.0 0.0 0.0 0.0 0.0 0.0 0.0 5.0 10.0 0.0 5.0 0.0 0.0 1024 YWPS4 RAISE60SEC 10.0 220.0 400.0 220.0 390.0 0.0 0.0 0.0 0.0 0.0 5.0 5.0 0.0 0.0 10.0 0.0 0.0 1025 YWPS4 RAISE6SEC 15.0 220.0 405.0 220.0 390.0 0.0 0.0 0.0 10.0 5.0 0.0 0.0 0.0 0.0 10.0 0.0 0.0 [1026 rows x 19 columns]
Methods:
-
set_interval
(interval)¶ Set the interval to load inputs for.
Examples
For an example see the
class level documentation
- Parameters
interval (str) – In the format ‘%Y/%m/%d %H:%M:%S’
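When replaying a sequence of dispatch intervals, set_interval is typically called once per interval before the inputs for that interval are queried. An illustrative sketch, assuming the inputs_loader from the class level example and a cache that holds both intervals:

>>> for interval in ['2019/01/01 00:00:00', '2019/01/01 00:05:00']:
...     inputs_loader.set_interval(interval)
...     volume_bids = inputs_loader.get_unit_volume_bids()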
-
get_unit_initial_conditions
()¶ Direct interface to
nempy.historical_inputs.xml_cache.XMLCacheManager.get_unit_initial_conditions
-
get_unit_volume_bids
()¶ Direct interface to
nempy.historical_inputs.xml_cache.XMLCacheManager.get_unit_volume_bids
-
get_unit_price_bids
()¶ Direct interface to
nempy.historical_inputs.mms_db.DBManager.BIDDAYOFFER_D.get_data
-
get_unit_details
()¶ Direct interface to
nempy.historical_inputs.mms_db.DBManager.DUDETAILSUMMARY.get_data
-
get_agc_enablement_limits
()¶ Direct interface to
nempy.historical_inputs.mms_db.DBManager.DISPATCHLOAD.get_data
-
get_UIGF_values
()¶ Direct interface to
nempy.historical_inputs.xml_cache.XMLCacheManager.get_UIGF_values
-
get_violations
()¶ Direct interface to
nempy.historical_inputs.xml_cache.XMLCacheManager.get_violations
-
get_constraint_violation_prices
()¶ Direct interface to
nempy.historical_inputs.xml_cache.XMLCacheManager.get_constraint_violation_prices
-
get_constraint_rhs
()¶ Direct interface to
nempy.historical_inputs.xml_cache.XMLCacheManager.get_constraint_rhs
-
get_constraint_type
()¶ Direct interface to
nempy.historical_inputs.xml_cache.XMLCacheManager.get_constraint_type
-
get_constraint_region_lhs
()¶ Direct interface to
nempy.historical_inputs.xml_cache.XMLCacheManager.get_constraint_region_lhs
-
get_constraint_unit_lhs
()¶ Direct interface to
nempy.historical_inputs.xml_cache.XMLCacheManager.get_constraint_unit_lhs
-
get_constraint_interconnector_lhs
()¶ Direct interface to
nempy.historical_inputs.xml_cache.XMLCacheManager.get_constraint_interconnector_lhs
-
get_market_interconnectors
()¶ Direct interface to
nempy.historical_inputs.mms_db.DBManager.MNSP_INTERCONNECTOR.get_data
-
get_market_interconnector_link_bid_availability
()¶ Direct interface to
nempy.historical_inputs.xml_cache.XMLCacheManager.get_market_interconnector_link_bid_availability
-
get_interconnector_constraint_parameters
()¶ Direct interface to
nempy.historical_inputs.mms_db.DBManager.INTERCONNECTORCONSTRAINT.get_data
-
get_interconnector_definitions
()¶ Direct interface to
nempy.historical_inputs.mms_db.DBManager.INTERCONNECTOR.get_data
-
get_regional_loads
()¶ Direct interface to
nempy.historical_inputs.mms_db.DBManager.DISPATCHREGIONSUM.get_data
-
get_interconnector_loss_segments
()¶ Direct interface to
nempy.historical_inputs.mms_db.DBManager.LOSSMODEL.get_data
-
get_interconnector_loss_parameters
()¶ Direct interface to
nempy.historical_inputs.mms_db.DBManager.LOSSFACTORMODEL.get_data
-
get_unit_fast_start_parameters
()¶ Direct interface to
nempy.historical_inputs.xml_cache.XMLCacheManager.get_unit_fast_start_parameters
-
is_over_constrained_dispatch_rerun
()¶ Checks if the over constrained dispatch rerun process was used by AEMO to dispatch this interval.
Examples
>>> import sqlite3
>>> from nempy.historical_inputs import mms_db >>> from nempy.historical_inputs import xml_cache
For the RawInputsLoader to work we need to construct a database and an inputs cache for it to load inputs from, and then pass interfaces to these to the inputs loader.
>>> con = sqlite3.connect('market_management_system.db') >>> mms_db_manager = mms_db.DBManager(connection=con) >>> xml_cache_manager = xml_cache.XMLCacheManager('test_nemde_cache')
In this example the database and cache have already been populated, so the inputs loader can be created straight away.
>>> inputs_loader = RawInputsLoader(xml_cache_manager, mms_db_manager)
Then we set the dispatch interval that we want to load inputs from.
>>> inputs_loader.set_interval('2019/01/01 00:00:00')
And then we can load some inputs.
>>> inputs_loader.is_over_constrained_dispatch_rerun() False
- Returns
- Return type
bool
-
units¶
Exceptions:
Raise for calling methods in incompatible order. |
Classes:
|
Loads unit related raw inputs and preprocesses them for compatibility with |
-
exception
nempy.historical_inputs.units.
MethodCallOrderError
¶ Raise for calling methods in incompatible order.
-
class
nempy.historical_inputs.units.
UnitData
(raw_input_loader)¶ Loads unit related raw inputs and preprocesses them for compatibility with
nempy.markets.SpotMarket
Examples
This example shows the setup used for the examples in the class methods. This setup is used to create a RawInputsLoader by calling the function _test_setup.
>>> import sqlite3 >>> from nempy.historical_inputs import mms_db >>> from nempy.historical_inputs import xml_cache >>> from nempy.historical_inputs import loaders
The UnitData class requires a RawInputsLoader instance.
>>> con = sqlite3.connect('market_management_system.db') >>> mms_db_manager = mms_db.DBManager(connection=con) >>> xml_cache_manager = xml_cache.XMLCacheManager('test_nemde_cache') >>> inputs_loader = loaders.RawInputsLoader(xml_cache_manager, mms_db_manager) >>> inputs_loader.set_interval('2019/01/01 00:00:00')
Create the UnitData instance.
>>> unit_data = UnitData(inputs_loader)
>>> unit_data.get_unit_bid_availability() unit capacity 0 AGLHAL 173.0 1 AGLSOM 160.0 2 ANGAST1 43.0 23 BALBG1 4.0 33 BALBL1 0.0 ... ... ... 989 YARWUN_1 140.0 990 YWPS1 370.0 999 YWPS2 380.0 1008 YWPS3 0.0 1017 YWPS4 370.0 [218 rows x 2 columns]
Methods:
Load the FCAS trapezium constraints into the UnitData class so subsequent method calls can access them.
Get the ramp rates as bid by units.
Get the unit bid FCAS trapeziums for contingency services.
Get the parameters needed to construct the fast start dispatch inflexibility profiles used for dispatch.
Get the unit bid maximum availability of each service.
Get the unit bid FCAS trapeziums for regulation services.
Get unit outputs at the start of the dispatch interval.
Get processed unit bids.
Get ramp rates used for constraining energy dispatch.
Get the scada ramp down rates for units with a lower regulation bid.
Get the scada ramp up rates for units with a raise regulation bid.
Get the bid-in maximum availability for scheduled units.
Get unit information.
Get the maximum availability predicted by the unconstrained intermittent generation forecast.
-
get_unit_bid_availability
()¶ Get the bid-in maximum availability for scheduled units.
Examples
>>> inputs_loader = _test_setup()
>>> unit_data = UnitData(inputs_loader)
>>> unit_data.get_unit_bid_availability() unit capacity 0 AGLHAL 173.0 1 AGLSOM 160.0 2 ANGAST1 43.0 23 BALBG1 4.0 33 BALBL1 0.0 ... ... ... 989 YARWUN_1 140.0 990 YWPS1 370.0 999 YWPS2 380.0 1008 YWPS3 0.0 1017 YWPS4 370.0 [218 rows x 2 columns]
- Returns
Columns:
Description:
unit
unique identifier for units, (as str)
capacity
unit bid-in max availability, in MW,
(as np.float64)
- Return type
pd.DataFrame
-
get_unit_uigf_limits
()¶ Get the maximum availability predicted by the unconstrained intermittent generation forecast.
Examples
>>> inputs_loader = _test_setup()
>>> unit_data = UnitData(inputs_loader)
>>> unit_data.get_unit_uigf_limits() unit capacity 0 ARWF1 56.755 1 BALDHWF1 9.160 2 BANN1 0.000 3 BLUFF1 4.833 4 BNGSF1 0.000 .. ... ... 57 WGWF1 25.445 58 WHITSF1 0.000 59 WOODLWN1 0.075 60 WRSF1 0.000 61 WRWF1 15.760 [62 rows x 2 columns]
- Returns
Columns:
Description:
unit
unique identifier for units, (as str)
capacity
the forecast max availability, in MW,
(as np.float64)
- Return type
pd.DataFrame
-
get_ramp_rates_used_for_energy_dispatch
()¶ Get ramp rates used for constraining energy dispatch.
The minimum of the bid-in ramp rates and the scada telemetered ramp rates is used. If a unit is ending the interval in fast start dispatch mode two, then ramp rates are not returned for that unit, as its ramp is strictly set by the dispatch inflexibility profile. If a unit starts in mode two, but ends the interval past mode two, then its ramp up rate is adjusted to reflect the fact that it was constrained by the inflexibility profile for part of the interval.
Examples
>>> inputs_loader = _test_setup()
>>> unit_data = UnitData(inputs_loader)
>>> unit_data.get_ramp_rates_used_for_energy_dispatch() unit initial_output ramp_up_rate ramp_down_rate 0 AGLHAL 0.000000 720.000000 720.000000 1 AGLSOM 0.000000 480.000000 480.000000 2 ANGAST1 0.000000 840.000000 840.000000 3 ARWF1 54.500000 1200.000000 600.000000 4 BALBG1 3.810000 6000.000000 6000.000000 .. ... ... ... ... 275 YARWUN_1 140.360001 0.000000 0.000000 276 YWPS1 366.665833 177.750006 177.750006 277 YWPS2 374.686066 180.000000 180.000000 278 YWPS3 0.000000 180.000000 180.000000 279 YWPS4 368.139252 180.000000 180.000000 [280 rows x 4 columns]
- Returns
Columns:
Description:
unit
unique identifier for units, (as str)
initial_output
the output/consumption of the unit at
the start of the dispatch interval,
in MW, (as np.float64)
ramp_up_rate
the ramp up rate, in MW/h,
(as np.float64)
ramp_down_rate
the ramp down rate, in MW/h,
(as np.float64)
- Return type
pd.DataFrame
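The element-wise minimum underlying this selection can be sketched in pandas; the frame below uses hypothetical column names and values, not the library's internal representation:

>>> rates = pd.DataFrame({
...     'unit': ['A', 'B'],
...     'bid_ramp_up_rate': [720.0, 180.0],
...     'scada_ramp_up_rate': [600.0, 200.0]})

>>> rates['ramp_up_rate'] = rates[['bid_ramp_up_rate', 'scada_ramp_up_rate']].min(axis=1)

>>> print(rates[['unit', 'ramp_up_rate']])   unit  ramp_up_rate 0    A         600.0 1    B         180.0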
-
get_as_bid_ramp_rates
()¶ Get the ramp rates as bid by units.
Examples
>>> inputs_loader = _test_setup()
>>> unit_data = UnitData(inputs_loader)
>>> unit_data.get_as_bid_ramp_rates() unit ramp_up_rate ramp_down_rate 0 AGLHAL 720.0 720.0 1 AGLSOM 480.0 480.0 2 ANGAST1 840.0 840.0 9 ARWF1 1200.0 600.0 23 BALBG1 6000.0 6000.0 ... ... ... ... 989 YARWUN_1 0.0 0.0 990 YWPS1 180.0 180.0 999 YWPS2 180.0 180.0 1008 YWPS3 180.0 180.0 1017 YWPS4 180.0 180.0 [280 rows x 3 columns]
- Returns
Columns:
Description:
unit
unique identifier for units, (as str)
ramp_up_rate
the ramp up rate, in MW/h,
(as np.float64)
ramp_down_rate
the ramp down rate, in MW/h,
(as np.float64)
- Return type
pd.DataFrame
-
get_initial_unit_output
()¶ Get unit outputs at the start of the dispatch interval.
Examples
>>> inputs_loader = _test_setup()
>>> unit_data = UnitData(inputs_loader)
>>> unit_data.get_initial_unit_output() unit initial_output 0 AGLHAL 0.000000 1 AGLSOM 0.000000 2 ANGAST1 0.000000 3 APD01 0.000000 4 ARWF1 54.500000 .. ... ... 283 YARWUN_1 140.360001 284 YWPS1 366.665833 285 YWPS2 374.686066 286 YWPS3 0.000000 287 YWPS4 368.139252 [288 rows x 2 columns]
- Returns
Columns:
Description:
unit
unique identifier for units, (as str)
initial_output
the output/consumption of the unit at
the start of the dispatch interval,
in MW, (as np.float64)
- Return type
pd.DataFrame
-
get_fast_start_profiles_for_dispatch
(unconstrained_dispatch=None)¶ Get the parameters needed to construct the fast start dispatch inflexibility profiles used for dispatch.
If the results of a dispatch run without fast start constraints are provided, then these are used to commit fast start units that start the interval in mode zero and have a non-zero dispatch result.
For more info on fast start dispatch inflexibility profiles
see AEMO docs.
Examples
>>> inputs_loader = _test_setup()
>>> unit_data = UnitData(inputs_loader)
>>> unit_data.get_fast_start_profiles_for_dispatch() unit end_mode time_in_end_mode mode_two_length mode_four_length min_loading 0 AGLHAL 0 5.0 3.0 2.0 2.0 1 AGLSOM 0 5.0 2.0 2.0 16.0 2 BARCALDN 0 5.0 4.0 4.0 12.0 3 BARRON-1 5 5.0 3.0 1.0 5.0 4 BARRON-2 5 5.0 3.0 1.0 5.0 .. ... ... ... ... ... ... 69 VPGS5 0 5.0 3.0 0.0 48.0 70 VPGS6 0 5.0 3.0 0.0 48.0 71 W/HOE#1 0 5.0 0.0 0.0 160.0 72 W/HOE#2 0 5.0 0.0 0.0 160.0 73 YABULU 0 5.0 6.0 6.0 83.0 [74 rows x 6 columns]
- Returns
Columns:
Description:
unit
unique identifier for units, (as str)
end_mode
the fast start mode the unit will end
the dispatch interval in, (as np.int64)
time_in_end_mode
the amount of time the unit will have
spent in the end mode at the end of the
dispatch interval, (as np.float64)
mode_two_length
the length of the unit's mode two, in minutes
(as np.float64)
mode_four_length
the length of the unit's mode four, in minutes
(as np.float64)
min_loading
the minimum operating level of the unit
during mode three, in MW, (as np.float64)
- Return type
pd.DataFrame
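Under AEMO's fast start inflexibility profile a unit in mode two ramps linearly from zero up to its minimum loading, so the columns above relate roughly as follows (a simplified illustration, not the library's dispatch logic):

>>> min_loading = 16.0  # MW, from the min_loading column
>>> mode_two_length = 2.0  # minutes, from the mode_two_length column
>>> time_in_mode_two = 1.0  # minutes spent in mode two so far
>>> min_loading * time_in_mode_two / mode_two_length 8.0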
-
get_unit_info
()¶ Get unit information.
Examples
>>> inputs_loader = _test_setup()
>>> unit_data = UnitData(inputs_loader)
>>> unit_data.get_unit_info() unit region dispatch_type loss_factor 0 AGLHAL SA1 generator 0.971500 1 AGLNOW1 NSW1 generator 1.003700 2 AGLSITA1 NSW1 generator 1.002400 3 AGLSOM VIC1 generator 0.984743 4 ANGAST1 SA1 generator 1.005674 .. ... ... ... ... 477 YWNL1 VIC1 generator 0.957300 478 YWPS1 VIC1 generator 0.969600 479 YWPS2 VIC1 generator 0.957300 480 YWPS3 VIC1 generator 0.957300 481 YWPS4 VIC1 generator 0.957300 [482 rows x 4 columns]
- Returns
Columns:
Description:
unit
unique identifier for units, (as str)
region
the market region in which the unit is
located, (as str)
dispatch_type
whether the unit is a ‘generator’ or
’load’, (as str)
loss_factor
the combined unit transmission and
distribution loss_factor, (as np.float64)
- Return type
pd.DataFrame
-
get_processed_bids
()¶ Get processed unit bids.
The bids are processed by scaling for AGC enablement limits, scaling for scada ramp rates, scaling for the unconstrained intermittent generation forecast and enforcing the preconditions for enabling FCAS bids. For more info on these processes
see AEMO docs.
Examples
>>> inputs_loader = _test_setup()
>>> unit_data = UnitData(inputs_loader)
>>> volume_bids, price_bids = unit_data.get_processed_bids()
>>> volume_bids unit service 1 2 3 4 5 6 7 8 9 10 0 AGLHAL energy 0.0 0.0 0.0 0.0 0.0 0.0 60.0 0.0 0.0 160.0 1 AGLSOM energy 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 170.0 2 ANGAST1 energy 0.0 0.0 0.0 0.0 0.0 50.0 0.0 0.0 0.0 50.0 9 ARWF1 energy 0.0 241.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 23 BALBG1 energy 4.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 26.0 .. ... ... ... ... ... ... ... ... ... ... ... ... 340 YWPS4 lower_6s 15.0 10.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 341 YWPS4 raise_60s 0.0 0.0 0.0 0.0 0.0 5.0 5.0 0.0 0.0 10.0 342 YWPS4 raise_6s 0.0 0.0 0.0 10.0 5.0 0.0 0.0 0.0 0.0 10.0 343 YWPS4 lower_reg 0.0 0.0 0.0 0.0 0.0 0.0 0.0 20.0 0.0 0.0 344 YWPS4 raise_reg 0.0 0.0 0.0 0.0 0.0 0.0 5.0 10.0 0.0 5.0 [569 rows x 12 columns]
>>> price_bids unit service 1 2 3 4 5 6 7 8 9 10 0 AGLHAL energy -971.50 0.00 270.86 358.30 406.87 484.59 562.31 1326.64 10277.37 13600.02 1 AGLSOM energy -984.74 0.00 83.70 108.32 142.79 279.67 444.12 985.73 13097.94 14278.73 2 ANGAST1 energy -1005.67 0.00 125.71 201.34 300.89 382.14 593.34 1382.65 10678.25 14582.27 9 ARWF1 energy -969.10 -63.00 2.00 4.00 8.00 16.00 32.00 64.00 128.00 14051.95 23 BALBG1 energy -994.80 0.00 19.92 47.37 75.18 109.45 298.44 443.13 10047.49 14424.60 .. ... ... ... ... ... ... ... ... ... ... ... ... 284 YWPS4 lower_6s 0.03 0.05 0.16 0.30 1.90 25.04 30.04 99.00 4600.00 9899.00 285 YWPS4 lower_reg 0.05 1.90 4.78 9.40 14.00 29.00 64.90 240.90 11990.00 13050.00 286 YWPS4 raise_60s 0.17 1.80 4.80 10.01 21.00 39.00 52.00 102.00 4400.00 11999.00 287 YWPS4 raise_6s 0.48 1.75 4.90 20.70 33.33 99.90 630.00 1999.00 6000.00 12299.00 288 YWPS4 raise_reg 0.05 2.70 9.99 19.99 49.00 95.50 240.00 450.50 950.50 11900.00 [569 rows x 12 columns]
Multiple Returns
volume_bids : pd.DataFrame
Columns:
Description:
unit
unique identifier for units, (as str)
service
the service the bid applies to, (as str)
1
the volume bid in the first bid band, in MW,
(as np.float64)
:
10
the volume in the tenth bid band, in MW,
(as np.float64)
price_bids : pd.DataFrame
Columns:
Description:
unit
unique identifier for units, (as str)
service
the service the bid applies to, (as str)
1
the price of the first bid band, in $/MW,
(as np.float64)
:
10
the price of the tenth bid band, in $/MW,
(as np.float64)
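Because the band columns share a common layout, simple aggregations apply directly; for example, the total volume a unit offers across its bands is the row sum of the band columns. A sketch with a toy frame of the same shape (hypothetical values):

>>> volume_bids = pd.DataFrame({
...     'unit': ['AGLHAL'], 'service': ['energy'],
...     '1': [0.0], '2': [60.0], '3': [160.0]})

>>> band_cols = [c for c in volume_bids.columns if c not in ('unit', 'service')]

>>> volume_bids[band_cols].sum(axis=1) 0    220.0 dtype: float64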
-
add_fcas_trapezium_constraints
()¶ Load the FCAS trapezium constraints into the UnitData class so subsequent method calls can access them.
Examples
>>> inputs_loader = _test_setup() >>> unit_data = UnitData(inputs_loader)
If we try to call add_fcas_trapezium_constraints before calling get_processed_bids we get an error.
>>> unit_data.add_fcas_trapezium_constraints() Traceback (most recent call last): ... units.MethodCallOrderError: Call get_processed_bids before add_fcas_trapezium_constraints.
After calling get_processed_bids the error goes away.
>>> volume_bids, price_bids = unit_data.get_processed_bids()
>>> unit_data.add_fcas_trapezium_constraints()
If we try to access the trapezium constraints before calling this method we get an error.
>>> inputs_loader = _test_setup() >>> unit_data = UnitData(inputs_loader) >>> unit_data.get_fcas_max_availability() Traceback (most recent call last): ... units.MethodCallOrderError: Call add_fcas_trapezium_constraints before get_fcas_max_availability.
After calling it the error goes away.
>>> volume_bids, price_bids = unit_data.get_processed_bids() >>> unit_data.add_fcas_trapezium_constraints()
>>> unit_data.get_fcas_max_availability() unit service max_availability 0 APD01 raise_5min 34.0000 1 APD01 raise_60s 34.0000 2 APD01 raise_6s 17.0000 3 ASNENC1 raise_5min 72.0000 4 ASNENC1 raise_60s 49.0000 .. ... ... ... 340 YWPS4 lower_6s 25.0000 341 YWPS4 raise_60s 10.0000 342 YWPS4 raise_6s 15.0000 343 YWPS4 lower_reg 15.1875 344 YWPS4 raise_reg 15.0000 [289 rows x 3 columns]
- Returns
- Return type
None
- Raises
MethodCallOrderError – if called before get_processed_bids
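The ordering requirement follows a simple guard pattern: a flag set by get_processed_bids is checked before the dependent method proceeds. A minimal sketch of the idea (not the library's actual implementation):

>>> class OrderedCalls:
...     def __init__(self):
...         self.bids_processed = False  # flipped by get_processed_bids
...     def get_processed_bids(self):
...         self.bids_processed = True
...     def add_fcas_trapezium_constraints(self):
...         if not self.bids_processed:
...             raise MethodCallOrderError(
...                 'Call get_processed_bids before add_fcas_trapezium_constraints.')

>>> calls = OrderedCalls()
>>> calls.get_processed_bids()
>>> calls.add_fcas_trapezium_constraints()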
-
get_fcas_max_availability
()¶ Get the unit bid maximum availability of each service.
Examples
>>> inputs_loader = _test_setup() >>> unit_data = UnitData(inputs_loader)
Required calls before calling get_fcas_max_availability.
>>> volume_bids, price_bids = unit_data.get_processed_bids() >>> unit_data.add_fcas_trapezium_constraints()
Now the FCAS max availability can be accessed.
>>> unit_data.get_fcas_max_availability() unit service max_availability 0 APD01 raise_5min 34.0000 1 APD01 raise_60s 34.0000 2 APD01 raise_6s 17.0000 3 ASNENC1 raise_5min 72.0000 4 ASNENC1 raise_60s 49.0000 .. ... ... ... 340 YWPS4 lower_6s 25.0000 341 YWPS4 raise_60s 10.0000 342 YWPS4 raise_6s 15.0000 343 YWPS4 lower_reg 15.1875 344 YWPS4 raise_reg 15.0000 [289 rows x 3 columns]
- Returns
Columns:
Description:
unit
unique identifier for units, (as str)
service
the service the bid applies to, (as str)
max_availability
the unit bid maximum availability, in MW,
(as np.float64)
- Return type
pd.DataFrame
- Raises
MethodCallOrderError – if the method is called before add_fcas_trapezium_constraints.
-
get_fcas_regulation_trapeziums
()¶ Get the unit bid FCAS trapeziums for regulation services.
Examples
>>> inputs_loader = _test_setup() >>> unit_data = UnitData(inputs_loader)
Required calls before calling get_fcas_regulation_trapeziums.
>>> volume_bids, price_bids = unit_data.get_processed_bids() >>> unit_data.add_fcas_trapezium_constraints()
Now the regulation trapeziums can be accessed.
>>> unit_data.get_fcas_regulation_trapeziums() unit service max_availability enablement_min low_break_point high_break_point enablement_max 19 BW01 lower_reg 35.015640 280.01013 315.025770 445.010250 445.01025 20 BW01 raise_reg 35.015640 280.01013 280.010130 409.994610 445.01025 27 BW02 lower_reg 28.390656 297.05151 325.442166 633.239010 633.23901 28 BW02 raise_reg 28.390656 297.05151 297.051510 604.848354 633.23901 35 CALL_B_1 lower_reg 15.000000 192.00000 207.000000 354.100010 354.10001 .. ... ... ... ... ... ... ... 295 TUNGATIN raise_reg 77.000000 3.00003 3.000030 1.000000 78.00000 311 VP6 lower_reg 30.125000 250.00000 280.125000 660.000000 660.00000 312 VP6 raise_reg 30.125000 250.00000 250.000000 629.875000 660.00000 343 YWPS4 lower_reg 15.187500 250.00000 265.187500 370.687500 370.68750 344 YWPS4 raise_reg 15.000000 250.00000 250.000000 355.687500 370.68750 [64 rows x 7 columns]
- Returns
Columns:
Description:
unit
unique identifier of a dispatch unit,
(as str)
service
the regulation service being offered,
(as str)
max_availability
the maximum volume of the regulation
service, in MW, (as np.float64)
enablement_min
the energy dispatch level at which
the unit can begin to provide
the regulation service, in MW,
(as np.float64)
low_break_point
the energy dispatch level at which
the unit can provide the full
regulation service offered, in MW,
(as np.float64)
high_break_point
the energy dispatch level at which the
unit can no longer provide the
full regulation service offered, in MW,
(as np.float64)
enablement_max
the energy dispatch level at which the
unit can no longer provide any
regulation service, in MW,
(as np.float64)
- Return type
pd.DataFrame
- Raises
MethodCallOrderError – if the method is called before add_fcas_trapezium_constraints.
-
get_scada_ramp_down_rates_of_lower_reg_units
()¶ Get the scada ramp down rates for units with a lower regulation bid.
Only units with scada ramp rates and a lower regulation bid that passes the enablement criteria are returned.
Examples
>>> inputs_loader = _test_setup() >>> unit_data = UnitData(inputs_loader)
Required calls before calling get_scada_ramp_down_rates_of_lower_reg_units.
>>> volume_bids, price_bids = unit_data.get_processed_bids() >>> unit_data.add_fcas_trapezium_constraints()
Now the method can be called.
>>> unit_data.get_scada_ramp_down_rates_of_lower_reg_units().head() unit initial_output ramp_down_rate 36 BW01 433.425049 420.187683 37 BW02 532.075073 340.687866 40 CALL_B_1 344.100006 240.000000 41 CALL_B_2 349.100006 240.000000 74 ER01 542.699951 298.875275
- Returns
Columns:
Description:
unit
unique identifier for units, (as str)
initial_output
the output/consumption of the unit at
the start of the dispatch interval,
in MW, (as np.float64)
ramp_down_rate
the ramp down rate, in MW/h,
(as np.float64)
- Return type
pd.DataFrame
- Raises
MethodCallOrderError – if the method is called before add_fcas_trapezium_constraints.
-
get_scada_ramp_up_rates_of_raise_reg_units
()¶ Get the scada ramp up rates for units with a raise regulation bid.
Only units with scada ramp rates and a raise regulation bid that passes the enablement criteria are returned.
Examples
>>> inputs_loader = _test_setup() >>> unit_data = UnitData(inputs_loader)
Required calls before calling get_scada_ramp_up_rates_of_raise_reg_units.
>>> volume_bids, price_bids = unit_data.get_processed_bids() >>> unit_data.add_fcas_trapezium_constraints()
Now the method can be called.
>>> unit_data.get_scada_ramp_up_rates_of_raise_reg_units().head() unit initial_output ramp_up_rate 36 BW01 433.425049 420.187683 37 BW02 532.075073 340.687866 40 CALL_B_1 344.100006 240.000000 41 CALL_B_2 349.100006 240.000000 74 ER01 542.699951 299.999542
- Returns
Columns:
Description:
unit
unique identifier for units, (as str)
initial_output
the output/consumption of the unit at
the start of the dispatch interval,
in MW, (as np.float64)
ramp_up_rate
the ramp up rate, in MW/h,
(as np.float64)
- Return type
pd.DataFrame
- Raises
MethodCallOrderError – if the method is called before add_fcas_trapezium_constraints.
-
get_contingency_services
()¶ Get the unit bid FCAS trapeziums for contingency services.
Examples
>>> inputs_loader = _test_setup() >>> unit_data = UnitData(inputs_loader)
Required calls before calling get_contingency_services.
>>> volume_bids, price_bids = unit_data.get_processed_bids() >>> unit_data.add_fcas_trapezium_constraints()
Now the contingency service trapeziums can be accessed.
>>> unit_data.get_contingency_services() unit service max_availability enablement_min low_break_point high_break_point enablement_max 0 APD01 raise_5min 34.0 0.0 0.0 0.0 0.0 1 APD01 raise_60s 34.0 0.0 0.0 0.0 0.0 2 APD01 raise_6s 17.0 0.0 0.0 0.0 0.0 3 ASNENC1 raise_5min 72.0 0.0 0.0 0.0 0.0 4 ASNENC1 raise_60s 49.0 0.0 0.0 0.0 0.0 .. ... ... ... ... ... ... ... 338 YWPS4 lower_5min 15.0 250.0 265.0 385.0 385.0 339 YWPS4 lower_60s 20.0 250.0 270.0 385.0 385.0 340 YWPS4 lower_6s 25.0 250.0 275.0 385.0 385.0 341 YWPS4 raise_60s 10.0 220.0 220.0 390.0 400.0 342 YWPS4 raise_6s 15.0 220.0 220.0 390.0 405.0 [225 rows x 7 columns]
- Returns
Columns:
Description:
unit
unique identifier of a dispatch unit,
(as str)
service
the contingency service being offered,
(as str)
max_availability
the maximum volume of the contingency
service, in MW, (as np.float64)
enablement_min
the energy dispatch level at which
the unit can begin to provide
the contingency service, in MW,
(as np.float64)
low_break_point
the energy dispatch level at which
the unit can provide the full
contingency service offered, in MW,
(as np.float64)
high_break_point
the energy dispatch level at which the
unit can no longer provide the
full contingency service offered, in MW,
(as np.float64)
enablement_max
the energy dispatch level at which the
unit can no longer provide any
contingency service, in MW,
(as np.float64)
- Return type
pd.DataFrame
- Raises
MethodCallOrderError – if the method is called before add_fcas_trapezium_constraints.
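The trapezium columns define, as a piecewise linear function of the unit's energy dispatch level, how much of the service the unit can provide. A hedged sketch of evaluating such a trapezium (the standard shape, not code from the library):

>>> def trapezium_availability(energy, max_avail, enable_min, low_break, high_break, enable_max):
...     if energy <= enable_min or energy >= enable_max:
...         return 0.0
...     if energy < low_break:
...         return max_avail * (energy - enable_min) / (low_break - enable_min)
...     if energy > high_break:
...         return max_avail * (enable_max - energy) / (enable_max - high_break)
...     return max_avail

Using the YWPS4 lower_6s row from the example output above:

>>> trapezium_availability(260.0, 25.0, 250.0, 275.0, 385.0, 385.0) 10.0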
-
interconnectors¶
Classes:
|
Loads interconnector related raw inputs and preprocesses them for compatibility with |
-
class
nempy.historical_inputs.interconnectors.
InterconnectorData
(raw_input_loader)¶ Loads interconnector related raw inputs and preprocesses them for compatibility with
nempy.markets.SpotMarket
Examples
This example shows the setup used for the examples in the class methods. This setup is used to create a RawInputsLoader by calling the function _test_setup.
>>> import sqlite3 >>> from nempy.historical_inputs import mms_db >>> from nempy.historical_inputs import xml_cache >>> from nempy.historical_inputs import loaders
The InterconnectorData class requires a RawInputsLoader instance.
>>> con = sqlite3.connect('market_management_system.db') >>> mms_db_manager = mms_db.DBManager(connection=con) >>> xml_cache_manager = xml_cache.XMLCacheManager('test_nemde_cache') >>> inputs_loader = loaders.RawInputsLoader(xml_cache_manager, mms_db_manager) >>> inputs_loader.set_interval('2019/01/01 00:00:00')
Create an InterconnectorData instance.
>>> interconnector_data = InterconnectorData(inputs_loader)
>>> interconnector_data.get_interconnector_definitions() interconnector from_region to_region min max link from_region_loss_factor to_region_loss_factor generic_constraint_factor 0 V-SA VIC1 SA1 -850.0 950.0 V-SA 1.0000 1.0000 1 1 N-Q-MNSP1 NSW1 QLD1 -264.0 264.0 N-Q-MNSP1 1.0000 1.0000 1 2 NSW1-QLD1 NSW1 QLD1 -1659.0 1229.0 NSW1-QLD1 1.0000 1.0000 1 3 V-S-MNSP1 VIC1 SA1 -270.0 270.0 V-S-MNSP1 1.0000 1.0000 1 5 VIC1-NSW1 VIC1 NSW1 -2299.0 2399.0 VIC1-NSW1 1.0000 1.0000 1 0 T-V-MNSP1 TAS1 VIC1 0.0 478.0 BLNKTAS 1.0000 0.9839 1 1 T-V-MNSP1 VIC1 TAS1 0.0 478.0 BLNKVIC 0.9839 1.0000 -1
- Parameters
raw_input_loader (loaders.RawInputsLoader) –
Methods:
Returns inputs in the format needed to create interconnectors in the SpotMarket class.
Returns inputs in the format needed to set interconnector losses in the SpotMarket class.
-
get_interconnector_loss_model
()¶ Returns inputs in the format needed to set interconnector losses in the SpotMarket class.
Examples
>>> inputs_loader = _test_setup()
>>> interconnector_data = InterconnectorData(inputs_loader)
>>> loss_function, interpolation_break_points = interconnector_data.get_interconnector_loss_model()
>>> print(loss_function) interconnector link loss_function from_region_loss_share 0 V-SA V-SA <function InterconnectorData.get_interconnecto... 0.78 1 N-Q-MNSP1 N-Q-MNSP1 <function InterconnectorData.get_interconnecto... 0.66 2 NSW1-QLD1 NSW1-QLD1 <function InterconnectorData.get_interconnecto... 0.68 3 V-S-MNSP1 V-S-MNSP1 <function InterconnectorData.get_interconnecto... 0.67 4 VIC1-NSW1 VIC1-NSW1 <function InterconnectorData.get_interconnecto... 0.32 5 T-V-MNSP1 BLNKTAS <function InterconnectorData.get_interconnecto... 1.00 6 T-V-MNSP1 BLNKVIC <function InterconnectorData.get_interconnecto... 1.00
>>> print(interpolation_break_points) interconnector link loss_segment break_point 0 V-SA V-SA 1 -851.0 1 V-SA V-SA 2 -835.0 2 V-SA V-SA 3 -820.0 3 V-SA V-SA 4 -805.0 4 V-SA V-SA 5 -790.0 .. ... ... ... ... 599 T-V-MNSP1 BLNKVIC -80 -546.0 600 T-V-MNSP1 BLNKVIC -81 -559.0 601 T-V-MNSP1 BLNKVIC -82 -571.0 602 T-V-MNSP1 BLNKVIC -83 -583.0 603 T-V-MNSP1 BLNKVIC -84 -595.0 [604 rows x 4 columns]
Multiple Returns
loss_functions : pd.DataFrame
Columns:
Description:
interconnector
unique identifier of an interconnector,
(as str)
from_region_loss_share
The fraction of loss occurring in
the from region, 0.0 to 1.0,
(as np.float64)
loss_function
A function that takes a flow,
in MW as a float and returns the
losses in MW, (as callable)
interpolation_break_points : pd.DataFrame
Columns:
Description:
interconnector
unique identifier of an interconnector,
(as str)
loss_segment
unique identifier of a loss segment on
an interconnector basis, (as np.float64)
break_point
points between which the loss function
will be linearly interpolated, in MW
(as np.float64)
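The two returns are designed to be used together: the loss function is evaluated at each break point, and losses between break points are interpolated linearly. A sketch of that step using numpy, with a toy quadratic standing in for a real loss function:

>>> import numpy as np

>>> toy_loss_function = lambda flow: 0.01 * flow ** 2  # stand-in, not a real loss function

>>> break_points = np.array([-100.0, 0.0, 100.0])

>>> losses = np.array([toy_loss_function(b) for b in break_points])

>>> float(np.interp(50.0, break_points, losses)) 50.0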
-
get_interconnector_definitions
()¶ Returns inputs in the format needed to create interconnectors in the SpotMarket class.
Examples
>>> inputs_loader = _test_setup()
>>> interconnector_data = InterconnectorData(inputs_loader)
>>> interconnector_data.get_interconnector_definitions() interconnector from_region to_region min max link from_region_loss_factor to_region_loss_factor generic_constraint_factor 0 V-SA VIC1 SA1 -850.0 950.0 V-SA 1.0000 1.0000 1 1 N-Q-MNSP1 NSW1 QLD1 -264.0 264.0 N-Q-MNSP1 1.0000 1.0000 1 2 NSW1-QLD1 NSW1 QLD1 -1659.0 1229.0 NSW1-QLD1 1.0000 1.0000 1 3 V-S-MNSP1 VIC1 SA1 -270.0 270.0 V-S-MNSP1 1.0000 1.0000 1 5 VIC1-NSW1 VIC1 NSW1 -2299.0 2399.0 VIC1-NSW1 1.0000 1.0000 1 0 T-V-MNSP1 TAS1 VIC1 0.0 478.0 BLNKTAS 1.0000 0.9839 1 1 T-V-MNSP1 VIC1 TAS1 0.0 478.0 BLNKVIC 0.9839 1.0000 -1
- Returns
Columns:
Description:
interconnector
unique identifier of an interconnector,
(as str)
to_region
the region that receives power
when flow is in the positive
direction, (as str)
from_region
the region that power is drawn
from when flow is in the
positive direction, (as str)
max
the maximum power flow on the
interconnector, in MW (as np.float64)
min
the minimum power flow on the
interconnector; if power can flow in the
negative direction then this will be
negative, in MW (as np.float64)
from_region_loss_factor
the loss factor between the from
end of the interconnector and the
regional reference node,
(as np.float64)
to_region_loss_factor
the loss factor between the to
end of the interconnector and
the regional reference node,
(as np.float64)
- Return type
pd.DataFrame
demand¶
Classes:
|
Loads demand related raw data and preprocesses it for compatibility with the SpotMarket class. |
Functions:
|
Re-formats the AEMO MMS table DISPATCHREGIONSUM to be compatible with the SpotMarket class. |
-
class
nempy.historical_inputs.demand.
DemandData
(raw_inputs_loader)¶ Loads demand related raw data and preprocesses it for compatibility with the SpotMarket class.
Examples
The DemandData class requires a RawInputsLoader instance.
>>> import sqlite3 >>> from nempy.historical_inputs import mms_db >>> from nempy.historical_inputs import xml_cache >>> from nempy.historical_inputs import loaders >>> con = sqlite3.connect('market_management_system.db') >>> mms_db_manager = mms_db.DBManager(connection=con) >>> xml_cache_manager = xml_cache.XMLCacheManager('test_nemde_cache') >>> inputs_loader = loaders.RawInputsLoader(xml_cache_manager, mms_db_manager) >>> inputs_loader.set_interval('2019/01/01 00:00:00')
>>> demand_data = DemandData(inputs_loader)
>>> demand_data.get_operational_demand() region demand 0 NSW1 7598.63 1 QLD1 5904.65 2 SA1 1447.52 3 TAS1 1026.00 4 VIC1 4401.10
- Parameters
raw_inputs_loader –
Methods:
Get the operational demand used to determine the regional energy dispatch constraints.
-
get_operational_demand
()¶ Get the operational demand used to determine the regional energy dispatch constraints.
Examples
See class level example.
- Returns
Columns:
Description:
region
unique identifier of a market region,
(as str)
demand
the non-dispatchable demand of the region,
in MW, (as np.float64)
loss_function_demand
the measure of demand used when creating
interconnector loss functions, in MW,
(as np.float64)
- Return type
pd.DataFrame
-
nempy.historical_inputs.demand.
format_regional_demand
(DISPATCHREGIONSUM)¶ Re-formats the AEMO MMS table DISPATCHREGIONSUM to be compatible with the SpotMarket class.
Examples
>>> DISPATCHREGIONSUM = pd.DataFrame({ ... 'REGIONID': ['NSW1', 'SA1'], ... 'TOTALDEMAND': [8000.0, 4000.0], ... 'DEMANDFORECAST': [10.0, -10.0], ... 'INITIALSUPPLY': [7995.0, 4006.0]})
>>> regional_demand = format_regional_demand(DISPATCHREGIONSUM)
>>> print(regional_demand) region demand loss_function_demand 0 NSW1 8000.0 8005.0 1 SA1 4000.0 3996.0
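As the example shows, demand is taken from TOTALDEMAND while loss_function_demand is INITIALSUPPLY plus DEMANDFORECAST. An equivalent transformation can be sketched as follows (inferred from the example output, not the library's source):

>>> def format_regional_demand_sketch(dispatch_region_sum):
...     return pd.DataFrame({
...         'region': dispatch_region_sum['REGIONID'],
...         'demand': dispatch_region_sum['TOTALDEMAND'],
...         'loss_function_demand': (dispatch_region_sum['INITIALSUPPLY'] +
...                                  dispatch_region_sum['DEMANDFORECAST'])})

>>> print(format_regional_demand_sketch(DISPATCHREGIONSUM))   region  demand  loss_function_demand 0   NSW1  8000.0                8005.0 1    SA1  4000.0                3996.0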
constraints¶
Classes:
|
Loads generic constraint related raw inputs and preprocesses them for compatibility with |
-
class
nempy.historical_inputs.constraints.
ConstraintData
(raw_inputs_loader)¶ Loads generic constraint related raw inputs and preprocesses them for compatibility with
nempy.markets.SpotMarket
Examples
This example shows the setup used for the examples in the class methods. This setup is used to create a RawInputsLoader by calling the function _test_setup.
>>> import sqlite3 >>> from nempy.historical_inputs import mms_db >>> from nempy.historical_inputs import xml_cache >>> from nempy.historical_inputs import loaders
The ConstraintData class requires a RawInputsLoader instance.
>>> con = sqlite3.connect('market_management_system.db') >>> mms_db_manager = mms_db.DBManager(connection=con) >>> xml_cache_manager = xml_cache.XMLCacheManager('test_nemde_cache') >>> inputs_loader = loaders.RawInputsLoader(xml_cache_manager, mms_db_manager) >>> inputs_loader.set_interval('2019/01/01 00:00:00')
Create a ConstraintData instance.
>>> constraint_data = ConstraintData(inputs_loader)
>>> constraint_data.get_rhs_and_type_excluding_regional_fcas_constraints() set rhs type 0 #BANN1_E 32.000000 <= 1 #BNGSF2_E 3.000000 <= 2 #CROWLWF1_E 43.000000 <= 3 #CSPVPS1_E 29.000000 <= 4 #DAYDSF1_E 0.000000 <= .. ... ... ... 704 V_OWF_NRB_0 10000.001000 <= 705 V_OWF_TGTSNRBHTN_30 10030.000000 <= 706 V_S_NIL_ROCOF 812.280029 <= 707 V_T_NIL_BL1 478.000000 <= 708 V_T_NIL_FCSPS 425.154024 <= [574 rows x 3 columns]
- Parameters
raw_inputs_loader (loaders.RawInputsLoader) –
Methods:
Get the violation costs of non-generic constraint groups.
Get constraint details needed for setting FCAS requirements.
Get the lhs coefficients of interconnectors.
Get the lhs coefficients of regions.
Get the rhs values and types for generic constraints.
Get the rhs values and types for generic constraints, excluding regional FCAS constraints.
Get the lhs coefficients of units.
Get the violation costs for generic constraints.
Get a boolean indicating if the over constrained dispatch rerun process was used for this interval.
-
get_rhs_and_type_excluding_regional_fcas_constraints
()¶ Get the rhs values and types for generic constraints, excluding regional FCAS constraints.
Examples
>>> inputs_loader = _test_setup()
>>> unit_data = ConstraintData(inputs_loader)
>>> unit_data.get_rhs_and_type_excluding_regional_fcas_constraints() set rhs type 0 #BANN1_E 32.000000 <= 1 #BNGSF2_E 3.000000 <= 2 #CROWLWF1_E 43.000000 <= 3 #CSPVPS1_E 29.000000 <= 4 #DAYDSF1_E 0.000000 <= .. ... ... ... 704 V_OWF_NRB_0 10000.001000 <= 705 V_OWF_TGTSNRBHTN_30 10030.000000 <= 706 V_S_NIL_ROCOF 812.280029 <= 707 V_T_NIL_BL1 478.000000 <= 708 V_T_NIL_FCSPS 425.154024 <= [574 rows x 3 columns]
- Returns
Columns:
Description:
set
the unique identifier of the constraint set,
(as str)
type
the direction of the constraint >=, <= or
=, (as str)
rhs
the right hand side value of the constraint,
(as np.float64)
- Return type
pd.DataFrame
-
get_rhs_and_type
()¶ Get the rhs values and types for generic constraints.
Examples
>>> inputs_loader = _test_setup()
>>> unit_data = ConstraintData(inputs_loader)
>>> unit_data.get_rhs_and_type() set rhs type 0 #BANN1_E 32.000000 <= 1 #BNGSF2_E 3.000000 <= 2 #CROWLWF1_E 43.000000 <= 3 #CSPVPS1_E 29.000000 <= 4 #DAYDSF1_E 0.000000 <= .. ... ... ... 704 V_OWF_NRB_0 10000.001000 <= 705 V_OWF_TGTSNRBHTN_30 10030.000000 <= 706 V_S_NIL_ROCOF 812.280029 <= 707 V_T_NIL_BL1 478.000000 <= 708 V_T_NIL_FCSPS 425.154024 <= [709 rows x 3 columns]
- Returns
Columns:
Description:
set
the unique identifier of the constraint set,
(as str)
type
the direction of the constraint >=, <= or
=, (as str)
rhs
the right hand side value of the constraint,
(as np.float64)
- Return type
pd.DataFrame
-
get_unit_lhs
()¶ Get the lhs coefficients of units.
Examples
>>> inputs_loader = _test_setup()
>>> unit_data = ConstraintData(inputs_loader)
>>> unit_data.get_unit_lhs() set unit service coefficient 0 #BANN1_E BANN1 energy 1.0 1 #BNGSF2_E BNGSF2 energy 1.0 2 #CROWLWF1_E CROWLWF1 energy 1.0 3 #CSPVPS1_E CSPVPS1 energy 1.0 4 #DAYDSF1_E DAYDSF1 energy 1.0 ... ... ... ... ... 5864 V_ARWF_FSTTRP_5 ARWF1 energy 1.0 5865 V_MTGBRAND_33WT MTGELWF1 energy 1.0 5866 V_OAKHILL_TFB_42 OAKLAND1 energy 1.0 5867 V_OWF_NRB_0 OAKLAND1 energy 1.0 5868 V_OWF_TGTSNRBHTN_30 OAKLAND1 energy 1.0 [5869 rows x 4 columns]
- Returns
Columns:
Description:
set
the unique identifier of the constraint set
to map the lhs coefficients to, (as str)
unit
the unit whose variables will be mapped to
the lhs, (as str)
service
the service whose variables will be mapped to the lhs, (as str)
coefficient
the lhs coefficient (as np.float64)
- Return type
pd.DataFrame
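Because the lhs tables are keyed by set, they can be joined to the rhs and type information to assemble complete constraints. A pandas sketch with toy rows mirroring the column layouts above (hypothetical data):

>>> unit_lhs = pd.DataFrame({
...     'set': ['#BANN1_E'], 'unit': ['BANN1'],
...     'service': ['energy'], 'coefficient': [1.0]})

>>> rhs_and_type = pd.DataFrame({
...     'set': ['#BANN1_E'], 'rhs': [32.0], 'type': ['<=']})

>>> print(pd.merge(unit_lhs, rhs_and_type, on='set'))         set   unit service  coefficient   rhs type 0  #BANN1_E  BANN1  energy          1.0  32.0   <=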
-
get_interconnector_lhs
()¶ Get the lhs coefficients of interconnectors.
Examples
>>> inputs_loader = _test_setup()
>>> unit_data = ConstraintData(inputs_loader)
>>> unit_data.get_interconnector_lhs() set unit service coefficient 0 #BANN1_E BANN1 energy 1.0 1 #BNGSF2_E BNGSF2 energy 1.0 2 #CROWLWF1_E CROWLWF1 energy 1.0 3 #CSPVPS1_E CSPVPS1 energy 1.0 4 #DAYDSF1_E DAYDSF1 energy 1.0 ... ... ... ... ... 5864 V_ARWF_FSTTRP_5 ARWF1 energy 1.0 5865 V_MTGBRAND_33WT MTGELWF1 energy 1.0 5866 V_OAKHILL_TFB_42 OAKLAND1 energy 1.0 5867 V_OWF_NRB_0 OAKLAND1 energy 1.0 5868 V_OWF_TGTSNRBHTN_30 OAKLAND1 energy 1.0 [5869 rows x 4 columns]
- Returns
Columns:
Description:
set
the unique identifier of the constraint set
to map the lhs coefficients to, (as str)
interconnector
the interconnector whose variables will be
mapped to the lhs, (as str)
coefficient
the lhs coefficient (as np.float64)
- Return type
pd.DataFrame
-
get_region_lhs
()¶ Get the lhs coefficients of regions.
Examples
>>> inputs_loader = _test_setup()
>>> unit_data = ConstraintData(inputs_loader)
>>> unit_data.get_region_lhs() set region service coefficient 0 F_I+LREG_0120 NSW1 lower_reg 1.0 1 F_I+LREG_0120 QLD1 lower_reg 1.0 2 F_I+LREG_0120 SA1 lower_reg 1.0 3 F_I+LREG_0120 TAS1 lower_reg 1.0 4 F_I+LREG_0120 VIC1 lower_reg 1.0 .. ... ... ... ... 478 F_T+NIL_WF_TG_R5 TAS1 raise_reg 1.0 479 F_T+NIL_WF_TG_R6 TAS1 raise_6s 1.0 480 F_T+NIL_WF_TG_R60 TAS1 raise_60s 1.0 481 F_T+RREG_0050 TAS1 raise_reg 1.0 482 F_T_NIL_MINP_R6 TAS1 raise_6s 1.0 [483 rows x 4 columns]
- Returns
Columns:
Description:
set
the unique identifier of the constraint set
to map the lhs coefficients to, (as str)
region
the region whose variables will be mapped
to the lhs, (as str)
service
the service whose variables will be mapped
to the lhs, (as str)
coefficient
the lhs coefficient (as np.float64)
- Return type
pd.DataFrame
-
get_fcas_requirements
()¶ Get constraint details needed for setting FCAS requirements.
Examples
>>> inputs_loader = _test_setup()
>>> unit_data = ConstraintData(inputs_loader)
>>> unit_data.get_fcas_requirements() set service region type volume 0 F_I+LREG_0120 lower_reg NSW1 >= 120.000000 1 F_I+LREG_0120 lower_reg QLD1 >= 120.000000 2 F_I+LREG_0120 lower_reg SA1 >= 120.000000 3 F_I+LREG_0120 lower_reg TAS1 >= 120.000000 4 F_I+LREG_0120 lower_reg VIC1 >= 120.000000 .. ... ... ... ... ... 478 F_T+NIL_WF_TG_R5 raise_reg TAS1 >= 62.899972 479 F_T+NIL_WF_TG_R6 raise_6s TAS1 >= 67.073327 480 F_T+NIL_WF_TG_R60 raise_60s TAS1 >= 83.841637 481 F_T+RREG_0050 raise_reg TAS1 >= -9950.000000 482 F_T_NIL_MINP_R6 raise_6s TAS1 >= 35.000000 [483 rows x 5 columns]
- Returns
Columns:
Description:
set
unique identifier of the requirement set,
(as str)
service
the service or services the requirement set
applies to (as str)
region
the regions that can contribute to meeting a
requirement, (as str)
volume
the amount of service required, in MW,
(as np.float64)
type
the direction of the constraint: ‘=’, ‘>=’ or
‘<=’; optional, a value of ‘=’ is assumed if
the column is missing, (as str)
- Return type
pd.DataFrame
-
get_violation_costs
()¶ Get the violation costs for generic constraints.
Examples
>>> inputs_loader = _test_setup()
>>> unit_data = ConstraintData(inputs_loader)
>>> unit_data.get_violation_costs() set cost 0 #BANN1_E 5220000.0 1 #BNGSF2_E 5220000.0 2 #CROWLWF1_E 5220000.0 3 #CSPVPS1_E 5220000.0 4 #DAYDSF1_E 5220000.0 .. ... ... 704 V_OWF_NRB_0 5220000.0 705 V_OWF_TGTSNRBHTN_30 5220000.0 706 V_S_NIL_ROCOF 507500.0 707 V_T_NIL_BL1 5220000.0 708 V_T_NIL_FCSPS 435000.0 [709 rows x 2 columns]
- Returns
Columns:
Description:
set
the unique identifier of the constraint set
to map the lhs coefficients to, (as str)
cost
the cost to the objective function of
violating the constraint, (as np.float64)
- Return type
pd.DataFrame
-
get_constraint_violation_prices
()¶ Get the violation costs of non-generic constraint groups.
Examples
>>> inputs_loader = _test_setup()
>>> unit_data = ConstraintData(inputs_loader)
>>> unit_data.get_constraint_violation_prices() {'regional_demand': 2175000.0, 'interocnnector': 16675000.0, 'generic_constraint': 435000.0, 'ramp_rate': 16747500.0, 'unit_capacity': 5365000.0, 'energy_offer': 16457500.0, 'fcas_profile': 2247500.0, 'fcas_max_avail': 2247500.0, 'fcas_enablement_min': 1015000.0, 'fcas_enablement_max': 1015000.0, 'fast_start': 16385000.0, 'mnsp_ramp_rate': 16747500.0, 'msnp_offer': 16457500.0, 'mnsp_capacity': 5292500.0, 'uigf': 5582500.0, 'voll': 14500.0}
- Returns
- Return type
dict
-
is_over_constrained_dispatch_rerun
()¶ Get a boolean indicating if the over constrained dispatch rerun process was used for this interval.
Examples
>>> inputs_loader = _test_setup()
>>> unit_data = ConstraintData(inputs_loader)
>>> unit_data.is_over_constrained_dispatch_rerun() False
- Returns
- Return type
bool