UConn needs a mechanism to efficiently access FAMIS metrics that provides near real-time data without impacting performance for users of FAMIS proper. Users will want to be able to access these metrics in tabular and visual formats and on desktops or mobile devices.
Trial & Error
Initially, a prototype was built (for the UCONN Now Crew Supervisor set of metrics) that made a direct connection to FAMIS proper in Santa Clara. This provided truly real-time data, but the performance was unacceptable for a modern user interface. The query itself took ~2.5 seconds to execute on a UConn copy of the data and ~11 seconds to execute on FAMIS proper. Additionally, there was network latency between the UConn PHP server and FAMIS proper that ranged from ~30–45 seconds.
An 11-second response time to extract the real-time data was likely acceptable, but the delay related to suspected network issues was not. Although this problem could have been tracked down, there is additional benefit to creating METRICS_ tables on the UConn servers. A consistent polling interval against FAMIS proper, initially every 5 minutes, means a more controlled and predictable performance impact on Santa Clara. Additionally, collecting specific metrics provides a mechanism for storing point-in-time views that are currently challenging to extract from FAMIS.
Whiteboard the Solution
All tables storing processed metrics from FAMIS proper will be prefixed with METRICS_. For the top row of metrics displayed in the dashboard above, the following table is created:
The TIME_RECORDED field will be used by the dashboard to determine the most recent data and also to purge the data so the table does not grow too large.
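As a minimal sketch of both uses of TIME_RECORDED — the dashboard reading only the newest snapshot, and the purge keeping the table small — the queries might look like this. The non-key column names (PRIORITY) and the one-hour retention window are assumptions, since the full table definition is shown only in the dashboard image above:

```sql
-- Sketch: the dashboard reads only the most recent snapshot.
-- Column names other than CREW and TIME_RECORDED are assumed.
SELECT crew, priority, time_recorded
  FROM METRICS_CURRENT_CREWS
 WHERE time_recorded = (SELECT MAX(time_recorded)
                          FROM METRICS_CURRENT_CREWS)
 ORDER BY crew;

-- Sketch: purge older snapshots so the table does not grow too large
-- (a one-hour window here is illustrative, not from the source).
DELETE FROM METRICS_CURRENT_CREWS
 WHERE time_recorded < sysdate - 1/24;
```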
Defining the link to Santa Clara
In order to collect real-time metrics there needs to be a database link to the Accruent servers in Santa Clara. The link to storprd is defined as:
(DESCRIPTION =
  (ADDRESS = (PROTOCOL = TCP)(HOST = fdb03.accruentondemand.com)(PORT = 1521))
  (CONNECT_DATA =
    (SERVICE_NAME = STORPRD.ACCRUENTONDEMAND.COM)
  )
)
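With that connect descriptor in a tnsnames.ora entry (assumed here to be named STORPRD), the link itself could be created along these lines. This is illustrative only: the link name and account are assumptions, and the actual credentials are not reproduced:

```sql
-- Illustrative sketch: link name, account, and password placeholder are assumptions.
CREATE DATABASE LINK storprd
  CONNECT TO famis_reader IDENTIFIED BY "********"
  USING 'STORPRD';

-- Queries against FAMIS proper then reference the link with @storprd, e.g.:
-- SELECT COUNT(*) FROM sometable@storprd;
```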
Next, a stored procedure is needed to query FAMIS proper, then process and store the metrics in the UConn Shadow database.
First, create a supporting procedure to determine the latest time the metrics were updated:
create or replace PROCEDURE MAX_TIME_CREW_METRICS (LAST_TIME OUT VARCHAR2) AS
BEGIN
  SELECT to_char(max(time_recorded),'mm/dd/yyyy hh:mi:ss')
    INTO LAST_TIME FROM METRICS_CURRENT_CREWS;
END;
The main stored procedure first stores the date and time of the last update, then adds the new data, checks that the new data was properly added, and then deletes anything older:
create or replace PROCEDURE METRICS_COLLECT_CURRENT_CREWS AS
  last_metric_insert VARCHAR2(30);
  this_metric_insert VARCHAR2(30);
BEGIN
  -- Log the start of the metrics build
  insert into shadow_copy_log values(shadow_copy_log_seq.nextval, 'METRICBUILD', 'BUILDSTART', '', 'STARTMETRICCURCREW', 'noerror', sysdate, to_char(sysdate,'YYYYMMDD:HH24:MI:SS') );
  -- Remember the timestamp of the last completed update
  MAX_TIME_CREW_METRICS(last_metric_insert);
  -- Insert the new per-crew metrics. The inner queries are abridged:
  -- "..." marks columns and joins against FAMIS proper that are not
  -- reproduced here, and the UNION ALL structure is inferred from the
  -- surviving fragments (three 0-padded SELECTs rolled up by crew).
  INSERT INTO METRICS_CURRENT_CREWS
  SELECT nvl(crew,'UNSPECIFIED') crew, SUM(top_priority) priority, ...
  FROM (
    SELECT crew, SUM(DECODE(priority,1,1,0)) top_priority, ...
      FROM (SELECT crew, priority, ...
             WHERE WO_STATUS IN ('OPEN', 'ASSIGNED', 'PARTS')
           ) GROUP BY crew
    UNION ALL
    SELECT crew, 0 top_priority, ...
           0 class_5,
           SUM(DECODE(SIGN(enter_date - (TRUNC(sysdate-1) + 15/24)),1,1,0)) new_ct, ...
     WHERE enter_date > sysdate-7
     GROUP BY crew
    UNION ALL
    SELECT crew, 0 top_priority, ...
     WHERE wo_close_date > sysdate-7 GROUP BY crew
  ) GROUP BY crew ORDER BY crew;
  -- Verify the new rows landed, then purge the older snapshot
  MAX_TIME_CREW_METRICS(this_metric_insert);
  IF last_metric_insert <> this_metric_insert THEN
    DELETE FROM METRICS_CURRENT_CREWS
     WHERE TIME_RECORDED <> to_date(this_metric_insert,'mm/dd/yyyy hh:mi:ss');
  END IF;
  -- Log successful completion
  insert into shadow_copy_log values(shadow_copy_log_seq.nextval, 'METRICBUILD', 'BUILDFINISH', '', 'FINISHMETRICCURCREW', 'noerror', sysdate, to_char(sysdate,'YYYYMMDD:HH24:MI:SS') );
END;
Next, DBMS_SCHEDULER needs to be set up to collect updated metrics, initially every 5 minutes. The jobs associated with collecting metrics are prefixed with UPDATE_CURRENT_. Below are the details of a sample job definition:
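A minimal sketch of such a job, using the standard DBMS_SCHEDULER.CREATE_JOB interface — the job name here is an assumption following the UPDATE_CURRENT_ prefix convention, not the actual production job:

```sql
BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'UPDATE_CURRENT_CREWS',   -- assumed name, follows the UPDATE_CURRENT_ prefix
    job_type        => 'STORED_PROCEDURE',
    job_action      => 'METRICS_COLLECT_CURRENT_CREWS',
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=MINUTELY;INTERVAL=5', -- poll FAMIS proper every 5 minutes
    enabled         => TRUE);
END;
/
```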
Finally, modify the prototyped dashboard to read from the UConn Shadow METRICS_ tables instead of FAMIS proper.