 Research article
 Open Access
Combining photogrammetry and robotic total stations to obtain dimensional measurements of temporary facilities in construction field
Visualization in Engineering volume 1, Article number: 4 (2013)
Abstract
Background
Three-dimensional (3D) modeling and visualization of temporary site facilities is instrumental in revealing potential space conflicts and refining time and cost estimates. This research focuses on the implementation of photo-based 3D modeling in a time-dependent, dynamically-changing context.
Methods
We propose a cost-effective modeling technique to obtain dynamic dimension measurements of a moving object. The methodology, which integrates photo-based 3D modeling and robotic total station tracking technologies, better caters to the visualization and measurement needs in construction that are critical to operational safety and structural accuracy. The computational foundation of photogrammetry is first addressed; then the modeling procedure and the system design are described.
Results
In a module assembly yard, a rigging system being lifted by a mobile crane was identified as the moving object. The lengths and length changes of twelve slings on a newly-engineered rigging system were measured at different states in order to ensure quality and safety.
Conclusion
The proposed technique relies on two robotic total stations and three cameras and provides a simple, safe and effective solution to monitor the dimensional changes of a temporary facility in the construction field.
Background
Photogrammetric surveying has been widely applied in medical surgery, 3D modeling, engineering, manufacturing and map production. Virtual reality (VR) and augmented reality (AR) research has resorted to photos as the most straightforward and cost-effective means for field data collection in the construction management domain (Kamat et al. 2011). Due to the computational complexity of photogrammetric surveying, construction engineering and management researchers have attempted to reduce the number of photos used by imposing geometric constraints and automating the modeling process based on pattern recognition and feature detection (Golparvar-Fard et al. 2009). El-Omari and Moselhi (2008) integrated photogrammetry and laser scanning to reduce the time required for collecting data and modeling. Golparvar-Fard et al. (2011) used site photos to generate the point cloud, then matched and paired the images to generate the as-planned and as-built models enabled by the technique of structure from motion (SfM). Golparvar-Fard et al. (2011) further reduced the modeling time and cost by generating the point cloud from both photos and SfM. Successful point cloud applications were demonstrated in Golparvar-Fard et al. (2011) and Bhatla et al. (2012). Point cloud based applications aim to reduce the effort in 3D as-built modeling. However, the modeled object must remain stationary during the laser scanning process, while removing the noise data (redundant or irrelevant information) requires considerable time and expertise. Thus, point-cloud-based techniques are not suitable for modeling a particular moving object on a near real-time basis in the field.
Research has extended photogrammetry into videogrammetry; for instance, Fathi and Brilakis (2012) measured the dimensions of a roof using two video cameras based on SfM. However, extensive video post-processing effort is necessary to match the time stamp of each video frame recorded by each camera, and more than three stationary site control points are required in each photo frame for 3D modeling. To reduce the minimum quantity of required control points from three to two, a simplified photogrammetry-enabled AR approach (Photo-AR) was applied to assist in crane allocation and bored pile construction sequencing on a congested building site (Siu and Lu 2009, 2010, 2011). In short, for a life-size moving object found in the construction field (like the rigging system used in the case study of the present research), the inclusion of multiple fixed control points in each photo frame is infeasible. The above limitations identified in current 3D as-built methods have inspired us to develop an alternative solution which directly tracks and surveys target points by use of two synchronized RTS units, thus automatically providing the georeferenced inputs needed for photo-based 3D modeling.
At present, using a total station is common practice to determine positions in the site space, instead of applying the traverse and leveling methods of traditional surveying (Kavanagh 2009). Mainstream surveying research focuses on improving the accuracy of collected data by computation, for instance, through the least-squares adjustment algorithm (King 1997). On the other hand, the state-of-the-art robotic total station (RTS) adds tracking and automation capabilities to enhance positioning and surveying applications in the field, including building settlement monitoring, bridge deflection and tunnel boring machine guidance (Shen et al. 2011). Currently, the drawbacks of robotic total stations include high investment and application costs and the limited capability of each unit to track only one point at a particular time.
Physical dimensions of static building products that are not safely accessible can be surveyed through photogrammetry for quality control purposes (Dai and Lu 2010). It takes significant effort to process images to build 3D models of a static object, while dynamic changes of the object’s geometric configurations over time are ignored. A Photo-AR scene, consisting of existing plant facilities plus virtual models of temporary facilities, can be linked with a scheduled event on an industrial construction project. The AR scene is instrumental in revealing and analyzing potential workface space conflicts and refining productivity estimates for construction scheduling. In previous research, Photo-AR required at least two control points with known coordinates fixed on the site and explicitly visible in each photo. The scale of the AR scene can be fixed by using the two stationary control points (Siu and Lu 2009).
This paper reports our development and application of a modeling methodology for measuring the physical dimensions of a moving object and checking any changes in those dimensions during a dynamic process in the construction field. On site, stationary control points are usually located on the ground. It is difficult to include at least two ground-fixed points in each photo frame, especially when the large object being tracked is lifted and moved from a source location to a destination in the field. Therefore, this research is intended to model a moving object based on tracking a minimum quantity of dynamic control points on the object. As such, reliable dimensional measurements at one particular moment can be obtained at the least cost of equipment purchase and use. Photo-AR has been further enhanced by synchronizing cameras and RTS units to track dynamic points on a moving object. The proposed time-dependent 3D modeling methodology is cost-effective specifically for dynamic applications: physical dimensions of the moving object being modeled can be determined simply by processing photos from multiple cameras, supplemented by point-tracking results from two robotic total station units at a particular time. This better caters to the application needs of industrial construction in terms of modeling dynamic temporary facilities, which are critical to construction safety and productivity performance. The application background for our field experiments is given as follows.
As Canada’s leading producer of oil, gas and petrochemicals, Alberta is home to four ethane-cracking plants, including two of the world’s largest, with a combined annual capacity to produce 8.6 billion pounds of ethylene. In the foreseeable future, new refining capacity will be added to produce ethane from bitumen upgrading, which will directly source feedstock from downstream oil sands mining (Government of Alberta 2010). New construction and turnaround activities at industrial process plants consume substantial resources and involve diverse stakeholders who work closely towards delivering a project within a finite time window and a tight cost budget. In general, work items such as a pipe spool, a valve or a storage tank undergo a sequence of tasks which take place in a fabrication shop, at a module yard and on an industrial site. Each task is conducted by a specialist crew in a confined work space with the assistance of temporary facilities and equipment such as scaffolding, rigging systems and cranes. A new rigging system was engineered by a major industrial contractor to handle super modules with a maximum 160-ton lift capacity. The rigging frame system is made of steel and subject to bending under loading (Westover et al. 2012). Length-adjustable slings connect the rigging frame and an overhead plate to form a rigging system. The sling length measurements are critical to balancing the frame, which ensures the load is evenly spread and carried by each sling. However, direct measurement of sling lengths, such as with a measuring tape, is not feasible due to safety hazards and the dynamic movement of the rigging system.
In the remainder of this paper, the computing foundation of photogrammetry is briefly addressed. We then describe the modeling procedure, the system design integrating the use of multiple cameras and two robotic total stations, and the field implementation to check sling lengths in modeling a rigging system in a module assembly yard. Field testing results are presented and analyzed. Conclusions are drawn based on discussions of the experimental findings and future research.
Computing foundation of photogrammetry
Three-point absolute modeling approach
This research focuses on the implementation of close-range photogrammetry in a time-dependent, dynamically-changing context. The collinearity equations, given in Eqs. (1) and (2), constitute the mathematical foundation of photogrammetry for determining (1) the internal parameters of the camera, (2) image and object coordinates, and (3) error adjustment. The direct linear transform algorithm was formalized by Abdel-Aziz and Karara (1971) based on the collinearity equations, and has been further developed to simplify the transformation between the image pixel frame and the object space coordinates in digital photogrammetry (Bhatla et al. 2012; Mikhail et al. 2001; McGlone et al. 2004). Basically, the camera’s position and orientation parameters with respect to an object space coordinate system are determined by solving six unknowns, namely three perspective center coordinates and three orientation parameters: (X_{C}, Y_{C}, Z_{C}, ω, φ, κ). As two collinearity equations can be written for one particular point, relating its imaging point (x, y) in the photo frame to its three coordinates (X_{P}, Y_{P}, Z_{P}) in the space frame, three different points with known (x, y) and (X_{P}, Y_{P}, Z_{P}) define six equations, thus yielding a unique solution for the six unknowns (X_{C}, Y_{C}, Z_{C}, ω, φ, κ) (Figure 1). After the camera’s position and orientation parameters are determined, consider any new point whose (x, y) in the photo frame are known while its (X_{P}, Y_{P}, Z_{P}) in the object space are unknown: (X_{P}, Y_{P}, Z_{P}) can be expressed as functions of (x, y) by transforming Eqs. (1) and (2). If two photos taken from different perspectives both capture the same point, and the positions and orientations of the two camera stations are all determined, then four equations in three unknowns can be solved by least-squares adjustment techniques to derive the most likely values of the object space coordinates of the new point (Figure 2).
Where:
x, y are the image coordinates
x_{0}, y_{0} are the principal point coordinates
c_{x}, c_{y} are the principal distances scaled by λ in the x and y directions (c_{x} = cλ_{x} and c_{y} = cλ_{y})
c is the principal distance
λ is the scale factor
X_{P}, Y_{P}, Z_{P} are the object space coordinates of the point
X_{C}, Y_{C}, Z_{C} are the object space coordinates of the perspective center
ω, φ, κ are the rotation angles about the x, y and z axes
δx, δy are the total lens distortions in the x and y directions, determined from camera calibration.
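The mapping defined by the collinearity equations can be sketched numerically. The following is a minimal illustration, assuming the common sequential ω–φ–κ rotation convention and omitting the lens distortion terms (δx, δy); the function names and the default principal distance (18 mm) are illustrative assumptions, not part of the paper’s system, and sign conventions vary between photogrammetry texts.

```python
import numpy as np

def rotation_matrix(omega, phi, kappa):
    """Rotation matrix built from the three orientation angles (radians)."""
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(omega), -np.sin(omega)],
                   [0, np.sin(omega),  np.cos(omega)]])
    Ry = np.array([[ np.cos(phi), 0, np.sin(phi)],
                   [0, 1, 0],
                   [-np.sin(phi), 0, np.cos(phi)]])
    Rz = np.array([[np.cos(kappa), -np.sin(kappa), 0],
                   [np.sin(kappa),  np.cos(kappa), 0],
                   [0, 0, 1]])
    return Rz @ Ry @ Rx

def project(P, C, omega, phi, kappa, c=0.018, x0=0.0, y0=0.0):
    """Collinearity projection of object point P = (X_P, Y_P, Z_P) into the
    image plane of a camera whose perspective center is C = (X_C, Y_C, Z_C);
    lens distortion (dx, dy) is omitted for brevity."""
    m = rotation_matrix(omega, phi, kappa)
    u, v, w = m @ (np.asarray(P, float) - np.asarray(C, float))
    x = x0 - c * u / w
    y = y0 - c * v / w
    return x, y
```

Given two such projections of the same point from known camera stations, the four resulting equations in (X_{P}, Y_{P}, Z_{P}) are what the least-squares adjustment solves.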
Five-point relative modeling approach
When the only objective of a particular field application is to take measurements and build a three-dimensional (3D) model of an object, rather than fixing the exact position of the object in space, the three points with known coordinates in the field space are not needed. Instead, the relative orientation parameters can be determined from the (x, y) coordinates of a minimum of five points on the object in a minimum of two photos based on Eqs. (1) and (2), without requiring the coordinates of any point in the field. The five control points give five sets of collinearity equations with five degrees of freedom. The coordinates and orientations of the two cameras (X_{C}, Y_{C}, Z_{C}, ω, φ, κ) are determined with respect to the model coordinate system (Figure 3), whose origin o aligns with the principal point of one of the cameras. As such, the coordinates (X_{P}, Y_{P}, Z_{P}) of the object in the model frame can be determined by pairing imaging points (x, y) on the two photos, and the 3D model can be built in relative measurements (Figure 4). A scale bar, which can be the absolute measurement of a line section on the object, is used to convert a relative measurement of a dimension of the object into an absolute unit of measurement. Elaboration of the complete mathematical algorithms for the above five-point-two-photo approach can be found in Dai and Lu (2013).
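The scale-bar step can be sketched as follows: every relative model coordinate is multiplied by the ratio of the bar’s known absolute length to its length in the model frame. The helper below is a hypothetical illustration, not part of any photogrammetry package.

```python
import math

def dist(p, q):
    """Euclidean distance between two 3D points."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def scale_model(points, bar_ends, bar_length):
    """Convert relative model coordinates to absolute units using a scale bar.
    points: dict of point name -> model-frame coordinates;
    bar_ends: the two point names marking the scale-bar end points;
    bar_length: the measured bar length in absolute units."""
    s = bar_length / dist(points[bar_ends[0]], points[bar_ends[1]])
    return {name: tuple(s * c for c in p) for name, p in points.items()}
```

For example, if the bar spans 2 model units but measures 1.0 m in the field, every coordinate is halved into meters.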
The proposed dynamic modeling approach essentially follows the five-point relative modeling approach with the assistance of two synchronized RTS units: at a particular moment, a scale bar is automatically fixed on the object by tracking two points. In other words, the length of the scale bar is subject to change over time. In the long run, the proposed methodology and system design can be readily extended to implement a time-dependent three-point absolute modeling approach by synchronizing three RTS units to track the absolute coordinates of three points on the moving object.
Methods
Modeling procedure
The time-dependent dynamic modeling procedure is given in Figure 5. A minimum of two cameras plus two robotic total stations are required. Note that the real-time kinematic global positioning system (RTK-GPS) enabled by satellite navigation offers an alternative technology to RTS, but its reliability and accuracy in positioning a specific moving object are questionable. In contrast, the robotic total station (RTS) is the most accurate on-site survey instrument at present and is capable of providing millimeter-level accuracy for coordinate measurement of target points (Leica Geosystems 2012). It is possible to track multiple points on a moving object with multiple RTS units. As each total station can only track one target at a time, two RTS units are synchronized to simultaneously track and position two points through wireless automation command control.
The price of a robotic total station is around 80 to 100 times that of a camera. More than two cameras can be utilized to enhance the quality of photogrammetry modeling. Two cameras and two RTS units are initialized by synchronizing their internal clocks. The images are taken once the object falls entirely into the fields of view of the cameras. Each RTS unit is programmed to automatically track and survey one control point on the moving object being modeled. Via wireless communication networks, time-stamped survey data and digital photos are collected in the field for immediate processing and modeling on a laptop. At time point T_{0}, the photos and the coordinates of two points on the moving object are captured by the two cameras and two RTS units, respectively. Time stamp checking is then performed to assure synchronization quality, which guarantees the data are captured at approximately the same time. If the time differences between the photos and the control point survey data exceed a preset allowance (e.g. 1 second in the current case), the photos and the survey data are discarded from ensuing 3D modeling. Two images taken from different angles are used to build the 3D model by pairing and matching five common points in the two photos. The 3D model coordinate system is set up by selecting an arbitrary origin, so the photo-based 3D model is on an arbitrary scale. The image and object coordinates are then evaluated based on the collinearity equations. The scale of the model is determined from the coordinates of the two points fixed by the two robotic total station units, and dimensions of the object can then be obtained at T_{0}. The above tracking process is then repeated for the next event until the tracking process is terminated. The details of the available software for building the photo-based models are discussed in the following section.
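The time stamp check described above can be sketched as a simple filter over a capture set: if the spread between the earliest and latest time stamps of the photos and survey records exceeds the preset allowance (1 second here), the whole set is discarded. This is an illustrative sketch under that assumption; the field system’s actual implementation is not described at this level of detail.

```python
from datetime import datetime, timedelta

def synchronized(photo_times, survey_times, allowance=timedelta(seconds=1)):
    """Return True only if every photo and RTS survey time stamp falls
    within the preset allowance of one another; otherwise the capture
    set should be discarded from ensuing 3D modeling."""
    all_times = list(photo_times) + list(survey_times)
    return max(all_times) - min(all_times) <= allowance

t = datetime(2012, 5, 1, 10, 0, 0)
ok = synchronized([t, t + timedelta(milliseconds=400)],
                  [t + timedelta(milliseconds=800)])      # spread 0.8 s: keep
bad = synchronized([t], [t + timedelta(seconds=2)])       # spread 2 s: discard
```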
The time-dependent dynamic tracking method is proposed to monitor a moving object in the field space. Changes in measurements between two time events (such as length, area and volume) can be evaluated from the resulting models for construction application purposes. As illustrated in Figure 6, a photo-based 3D model is generated with an arbitrary scale at time T_{0}. The origin o is specified at the corner of the box. The coordinates of the points a (0, 0, 10) and b (10, 0, 10) are fixed by employing the robotic total station units. The length between a and b (10 cm) can be calculated and then used to fix the model scale. The geometric properties of the object, such as the areas (A_{01}: 100 cm^{2}, A_{02}: 200 cm^{2}) and volume (V_{0}: 2000 cm^{3}), can then be determined. At T_{1}, the object is remodeled following location and dimension changes, as shown in Figure 7. Similarly, the point coordinates of a (10, 10, 10) and b (30, 10, 10) are used to scale the photo-based 3D model at T_{1}. The geometric properties (A_{11}: 200 cm^{2}; A_{12}: 200 cm^{2}; V_{1}: 4000 cm^{3}), along with any changes (ΔA_{1} = 100 cm^{2}, ΔA_{2} = 0 cm^{2}, ΔV = 2000 cm^{3}) from T_{0} to T_{1}, can be evaluated.
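The scaling step in this worked example can be sketched as follows: the scale factor s is the ratio of the RTS-surveyed distance between a and b to the distance between their counterparts in the arbitrary model frame, and lengths, areas and volumes then scale by s, s^2 and s^3, respectively. The function names are hypothetical helpers for illustration.

```python
import math

def model_scale(rts_a, rts_b, model_a, model_b):
    """Scale factor: absolute distance between the two RTS-fixed points
    divided by the distance between their model-frame counterparts."""
    return math.dist(rts_a, rts_b) / math.dist(model_a, model_b)

def to_absolute(s, length=None, area=None, volume=None):
    """Lengths scale by s, areas by s**2 and volumes by s**3."""
    out = {}
    if length is not None:
        out['length'] = s * length
    if area is not None:
        out['area'] = s ** 2 * area
    if volume is not None:
        out['volume'] = s ** 3 * volume
    return out
```

For instance, if a and b sit one unit apart in the model frame but the RTS survey fixes them 10 cm apart, s = 10, so a model-frame area of 2 units becomes 200 cm^2.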
The advantages of applying photogrammetric techniques compared to traditional surveying include (1) non-contact measurement, (2) a relatively short turnaround cycle of data processing and modeling, and (3) measuring multiple points at one time. This application is particularly useful when direct-contact-based measurement is not viable due to the dynamic nature of a process and potential safety hazards. The following section presents the field application of the proposed time-dependent dynamic 3D modeling method on a rigging system engineered for lifting heavy industrial modules.
Cameras and PhotoModeler software
The system solution requires the integration of hardware and software. The software, PhotoModeler 2012 (EOS Systems 2011), was used to produce the photo-based 3D model with an arbitrary scale by importing three photos taken from different angles by three cameras, together with the coordinates of two control points on the rigging system, all captured simultaneously.
To achieve high quality photogrammetry modeling, three Canon T3i cameras were purchased and calibrated (CAD $699.98 each in May 2012). The internal clocks of the three cameras were synchronized such that the time stamps of photos taken at the same time fell within a small margin of difference. In order to shorten the turnaround time during actual site testing, Eye-Fi cards were used to facilitate wireless photo transfer from the cameras to a laptop. The card models were Connect X2 4 GB (CAD $39.99) and Pro X2 8 GB (CAD $99.99). The Eye-Fi card functions as a normal SD card for camera image storage, with a Wi-Fi signal receiver and transmitter embedded (Eye-Fi 2012). In the field testing, the time to transfer three high-resolution photos (5184 × 3456; 4.58 MB each) was under 30 seconds over the Internet or under 60 seconds over an “ad hoc” wireless network; the communication range of the Eye-Fi card was 27 m outdoors and 13 m indoors.
Camera calibration is essential to determine the lens distortions (radial and decentering distortion), the principal distance and the location of the principal point in the photo frame, referring to (δx, δy), c and (x_{0}, y_{0}) in Eqs. (1) and (2), respectively. Systematic errors due to lens distortions can be regulated and corrected through camera calibration. The high quality multiple-sheet calibration method provided by PhotoModeler was used, as shown in Figure 8. There were 125 coded targets (5 coded targets on each calibration sheet, and 25 sheets were printed), and the coded targets were automatically detected by PhotoModeler. The three cameras were successfully calibrated and the parameters stored in PhotoModeler for systematic error adjustment.
Once object images taken from different perspectives are imported into PhotoModeler, building the model involves human interaction to pick and match feature points in different photos. Human error in the modeling process is assessed in terms of residual error. The computational algorithm used in PhotoModeler, called bundle adjustment, essentially applies the collinearity equations (Eqs. 1 and 2) to simultaneously fix (1) the camera orientations, (2) the object and image point coordinates and (3) the residual error, with the objective of minimizing the residual error.
Robotic total stations
The conventional approach to fixing the scale of a photo-based 3D model is to use a scale bar mounted or marked on the object being modeled. However, the scale bar may not be visible in each photo if the object is moving. The reliability and accuracy of using the real-time kinematic global positioning system (RTK-GPS) to position a specific moving object are questionable. In contrast, the robotic total station (RTS) is the most accurate survey instrument and is capable of providing point survey data of millimeter-level accuracy (Leica Geosystems 2012). As one total station can only track and measure one target at a time, two RTS units need to be synchronized in order to simultaneously survey two points on a moving object through wireless automation command control.
The models of the two RTS units used in field testing were the Leica TS15I and the Leica TCRP1203+. A tablet program was developed to synchronize the operations of the two RTS units during site experiments. The two RTS units were synchronized by the built-in protocols (COM_{4} and COM_{6}) through an application programming interface (API) provided by Leica. In addition to synchronization, the two RTS units must be initialized in the object coordinate system. To set up the coordinate system, RTS_{1} is assigned as the origin. The direction pointing from RTS_{1} to a fixed point on the ground is defined as north. The east direction can then be established as the cross product of the north and the zenith (perpendicular to the ground). Each RTS unit automatically locks onto a reflective glass prism and tracks its location. The coordinates of the two control points are taken simultaneously, as controlled by the tablet program. The coordinates of the two control points at T_{i} are prerequisite inputs for calculating the distance between the two points by Eq. (3), which is used to take a distance measurement on the photo-based model in an absolute unit of measure.
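The frame set-up and the distance computation can be sketched as follows, assuming Eq. (3) is the ordinary Euclidean (straight-line) distance between the two tracked end points, consistent with the straight-line assumption stated in the field testing; the function names are illustrative, not Leica API calls.

```python
import numpy as np

def local_frame(ground_point):
    """Orthonormal site frame with RTS_1 at the origin: north points from
    RTS_1 toward a fixed ground point (projected to the horizontal plane),
    zenith is perpendicular to the ground, and east = north x zenith."""
    north = np.asarray(ground_point, dtype=float)
    north[2] = 0.0                       # keep the direction horizontal
    north /= np.linalg.norm(north)
    zenith = np.array([0.0, 0.0, 1.0])
    east = np.cross(north, zenith)
    return north, east, zenith

def point_distance(p1, p2):
    """Eq. (3) as assumed here: straight-line distance between the two
    simultaneously surveyed control points."""
    return float(np.linalg.norm(np.asarray(p1, float) - np.asarray(p2, float)))
```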
The three cameras were operated manually by three persons who tracked the moving object and continuously took photos during our field testing. In the near future, automated control of multiple cameras can be implemented in a way similar to the method used for the robotic total stations.
The focal lengths of the cameras were set to the minimum allowable value (18 mm) in order to capture the whole structure in the yard. The cameras were placed as far away from one another as possible in order to enhance the quality of photogrammetry modeling and reduce the residual error due to human factors. In general, more reliable results can be obtained if the angle between the perspectives of two cameras is close to 90°. Figures 9 and 10 show two different states of the rigging at T_{0} and T_{1}, respectively. The photo-based 3D models can be built using three photos simultaneously taken from different perspectives. The two RTS units track the two end points of a particular sling on the rigging system, as denoted by red arrows. In the end, the lengths of all the slings can be fixed from the resultant models at T_{0} and T_{1}, respectively.
Field testing
In a module assembly yard, a rigging system being lifted by a mobile crane was identified as the testbed object of practical size and complexity (Figure 11). The entire structure measured approximately 20 m wide, 20 m tall and 5 m deep. On each side of the rigging, six slings connected a horizontal beam and an overhead connection plate. The objectives of the testing included (1) measuring the sling lengths of the moving rigging frame to ensure the main beam is geometrically level at a particular time (T_{i}), and (2) checking changes in sling lengths to examine whether the rigging frame is stable during lifting. In addition, the quality of sling fabrication can be checked by comparing field measurements against design specifications. In the field, sling lengths can also be adjusted by a limited magnitude in order to match the design. This ensures proper tension in all the slings when the rigging is used to handle the heavy load of a full module.
Surveying markers and glass prisms were placed on the rigging system to facilitate photogrammetric modeling and automatic RTS tracking. The prism placement on the rigging system and the identification of each sling are shown in Figure 12. In the field experiments, we placed two mini prisms at the two end points of sling B(L.), on the “Y-plate” and the beam, respectively.
Site constraints
The weather during the field experiments was cloudy and windy. On site, temporary facilities such as a dumpster and the mobile crane were likely to block the view of the rigging frame. Therefore, to allow full coverage of the whole frame structure in all photos, a 115-foot (35-meter) clearance between the cameras and the rigging frame was required. Additionally, to satisfy the perspective-angle requirements between two photos in a photogrammetric survey, a certain distance was maintained between the three cameras. Photos were collected via wireless network transfer and were selected for ensuing modeling if the time stamp data recorded in the photos fell within one second of the RTS survey records. Then, on a laptop in the field, the photo-based models were built by manually matching the conjugate center points of the markers on the rigging frame in the PhotoModeler software platform.
The rigging frame swung under the wind load, making it difficult for total station units to track the positions of the markers. By synchronizing the robotic total station (RTS) units, two positions on the moving object were automatically tracked and accurately fixed at one particular time point: one RTS unit (Model: TS15I) was responsible for surveying the prism on the Y-plate, and the other (Model: TCRP1203+) locked onto the prism on the beam (Figure 13). The surveying results at one particular time point were converted into the sling length by Eq. (3), which was required for model scaling. Note that the sling length is assumed to be the straight-line distance between the two end points.
Results
Field testing results
Six models were built in the field for six particular moments in time; three moments (T_{0}, T_{1}, T_{2}) occurred when unbalanced weights were loaded near the left end of the frame, while the other three (T_{3}, T_{4}, T_{5}) represented the state of the rigging system after the unbalanced weights were removed. The models at T_{0} and T_{3} are shown in Figures 14 and 15, respectively. Each was built in PhotoModeler, on an arbitrary scale, from three photos taken from three perspectives. The two point coordinates obtained from the two synchronized RTS units were used to fix the length of sling B(L.), as given in Figures 14 and 15.
Sling length measurement data for T_{0}, T_{1}, T_{2} and for T_{3}, T_{4}, T_{5} are given in Tables 1 and 2, respectively. The sling length changes ΔL(T_{i}, T_{i+1}) between two time events, i and i+1, are summarized in Table 3. The measurement units are all millimeters. The relatively large length changes on A(L.) and A(R.) between any two time moments (Table 3) can be attributed to the longer distance between the sling and the center point of the rigging beam, while the significant changes on all the slings between T_{2} and T_{3} are largely caused by removing the “unbalanced” load from the rigging frame. It should be noted that a positive value in Table 3 denotes an increase in the distance between the two end points of a sling.
In the current site testing, we had only two RTS units available; the problem was defined as measuring the dimensional changes of the slings of a rigging system to ensure quality and safety. Hence, implementing the five-point relative modeling approach with two RTS units is sufficient and cost-effective in comparison with the three-point absolute modeling approach using three RTS units. The objective of implementing the proposed technique to analyze the dimensional changes on the moving rigging system was successfully fulfilled. The small magnitude of the sling length changes given in Table 3 indicates the rigging frame is relatively stable during lifting. In addition, the lengths of the slings assembled on site were adjusted to match the design based on Eq. (4). The proposed technique provides a simple, safe and effective solution to monitor the dimensional changes of a moving object, such as the sling lengths in this case study, and hence provides timely decision-making support.
As the sagging effect of a loose sling is ignored in the current study, the length change measurement approximately indicates the sling length extension. In fact, the photo-based approach allows the modeling of the sagging effect on the sling. The determination of the exact sling length based on the end-to-end distance and the sagging effect will be investigated in follow-up research.
In general, the value of the residual error is used as an indicator of the quality of photogrammetry modeling, which entails manually matching feature points on the object. In the field experiments, an object point P was identified and marked in the xy frame of each photo taken by the three cameras. Its coordinates in an arbitrary modeling space frame were then calculated based on the least-squares adjustment technique by applying the five-point relative modeling approach. After the point coordinates were determined, the corresponding image point (x, y) was recalculated by using the collinearity equations (1) and (2) in the corresponding image frame. The pixel difference between the manually marked (x, y) coordinates and the recalculated (x, y) coordinates is defined as the residual error (EOS Systems 2011). As a rule of thumb, if the largest residual is less than 10 pixels, the quality of a photogrammetry modeling project is high; otherwise, remodeling is needed. Note that the residual errors on the models resulting from the field experiments described in this paper were controlled within 3 pixels.
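The residual check can be sketched as follows: for each marked point, the pixel distance to its recalculated image position is computed, and the largest value is compared against the rule-of-thumb threshold. These helper names are illustrative assumptions, not PhotoModeler’s API.

```python
def max_residual(marked, reprojected):
    """Largest pixel residual between manually marked image points and
    their coordinates recalculated from the adjusted 3D model."""
    return max(((mx - rx) ** 2 + (my - ry) ** 2) ** 0.5
               for (mx, my), (rx, ry) in zip(marked, reprojected))

def modeling_quality_ok(marked, reprojected, threshold=10.0):
    """Rule of thumb from the text: remodel if any residual reaches ~10 px."""
    return max_residual(marked, reprojected) < threshold
```

For the field models described here, this check would pass comfortably, since all residuals were within 3 pixels.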
To quantitatively assess the modeling accuracy of photogrammetry in construction field applications, Dai and Lu (2008) conducted case studies in which the object dimensions ranged from 35 mm to 5720 mm, while the distance from the camera to the object ranged from 1 m to 6 m. Their experimental results showed that there was a 95% likelihood the photogrammetric measurement and the tape measurement would differ by −15.30 mm to +11.39 mm. A separate laboratory experiment was conducted to compare the results of the proposed technique against tape measurement. Note that a particular sling’s length can differ from its original design due to errors in the fabrication and assembly processes, and the true measurements of the rigging system at a particular loading state in the field testing are unknown. It was impractical to manually gauge the slings’ lengths due to the large size of the rigging system, accessibility constraints and safety concerns. Therefore, a laboratory experiment was designed to examine the measurement error of the proposed technique against direct manual measurements based on a simplified, reduced-scale model of the rigging system. In the laboratory, two cameras were set up to take photos of the mock rigging object (60 cm wide, 50 cm tall and 20 cm deep) at one particular instant. The distance between the cameras and the system was approximately 2 m. It was found that the differences between photogrammetric and manual measurements were within 0.4 mm on average. As for the rigging system in the real world, the measurement error would be expected to be much larger, as the size of the object and the camera-object distance were increased by roughly tenfold. The absolute measurement error is estimated to be in the order of 5 cm. Considering that the sling length is about 20 m, the relative measurement error would be approximately 0.25% in the current field testing.
Nonetheless, a formal systematic analysis of the accuracy of the proposed methodology, along with further enhancement of that accuracy through real-time computing, is left for future research.
Discussion
The main contributions of this research include: (1) proposing a cost-effective, time-dependent dynamic modeling methodology to measure the dimensions, and check the dimensional changes, of a moving object of practical size, by integrating photogrammetry and robotic total station based survey techniques; note that a minimum of two moving control points on the object being modeled must appear in each photo frame; (2) elucidating the fundamentals of photogrammetric surveying techniques instead of using commercially available software as a "black box", and discussing the algorithmic differences between the "three-point absolute" and "five-point relative" modeling approaches; (3) providing guidelines on the selection of available software and hardware, the equipment setup, the expected site constraints, and the modeling errors when implementing the technique on a construction site; (4) providing a real-world construction case study to show that valuable measurement data can be obtained and used in support of decision making in the field.
Conclusions
This research demonstrates the potential and feasibility of the proposed time-dependent dynamic modeling methodology, based on synchronizing multiple cameras and two robotic total station units, for checking the physical dimensions, and changes in dimension, of a moving object in the construction field. By building a photo-based 3D model at a particular time point, multiple dimensions of a dynamic object can be measured in absolute units of measure in a safe, cost-effective way. The methodology was successfully implemented to monitor the sling lengths of a rigging system engineered for lifting heavy modules in industrial construction. The data quality was further analyzed in terms of residual error, and the measurement error was discussed based on a reduced-scale prototype of the rigging system. The time-dependent dynamic modeling methodology thus provides a cost-effective means of tracking the physical dimensions and dimensional changes of a moving object over time, and better caters to the application needs of modeling, visualization and measurement of dynamic objects critical to industrial construction.
Further research will be conducted to: (1) fully integrate and automate the modeling methodology using robust, reliable image processing methods such as pattern recognition and feature detection, so that the manual operations of picking and matching points in photogrammetry modeling can be eliminated while the modeling quality, as quantified and evaluated by the residual error, is guaranteed; (2) extend the synchronization of RTS units from two to three, in order to track three control points on a dynamic object. The proposed dynamic modeling approach essentially follows the "five-point relative modeling approach": with the assistance of two synchronized RTS units, at any particular moment a scale bar is automatically fixed on the object by tracking two points; in other words, the length of the scale bar is subject to change over time. In the long run, the proposed methodology and system design can be readily extended to implement a time-dependent "three-point absolute modeling" approach by synchronizing three RTS units to track the absolute coordinates of three points on the moving object. As such, the coordinates of any point on the object in the field coordinate system can be fixed in real time. This would be valuable for materializing AR at the site level and lending timely, relevant decision support to construction engineers to improve quality, safety and productivity as field operations unfold.
The major barrier to implementing three RTS units lies in their cost: the price of a robotic total station is around 80–100 times that of a camera. Therefore, this research is intended to model a moving object by tracking the minimum number of dynamic control points on the object, so that reliable dimensional measurements at a particular moment can be obtained at the lowest equipment cost. In the current site testing, only two RTS units were available, and the problem definition was to measure sling lengths and their changes for a rigging system in order to ensure quality and safety; hence, implementing the "five-point relative modeling" approach with two RTS units was considered sufficient and cost-effective in comparison with the "three-point absolute modeling" approach using three RTS units; (3) analyze dynamic structural loading effects based on real-time measurements of sling sagging magnitude, which can be made possible through photogrammetry modeling. As the sagging effect of a loose sling is ignored in the current study, the end-to-end length measurement gives only an approximation of the sling length. In fact, the photo-based approach allows direct modeling of the sagging effect on the sling. The determination of the exact sling length from the end-to-end distance and the sagging effect will be investigated in follow-up research.
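The role of the two RTS-tracked control points can be sketched as follows: at each instant the two tracked field coordinates define a scale bar whose known length converts measurements from the arbitrary relative model frame into absolute units. The function names and coordinate values below are illustrative assumptions, not part of the implemented system.

```python
import math

def scale_bar_length(p1, p2):
    """Length of the 'scale bar' formed by two control points tracked by
    synchronized RTS units, in the field coordinate system (metres).
    Because the object moves, this length may change at every epoch."""
    return math.dist(p1, p2)

def to_absolute(model_length, model_bar_length, field_bar_length):
    """Convert a length measured in the arbitrary relative model frame
    into absolute field units, using the scale bar observed at the same
    instant: absolute = model * (field bar / model bar)."""
    return model_length * (field_bar_length / model_bar_length)

# Hypothetical epoch: the two tracked points are 5 m apart in the field,
# while the same bar measures 1.0 unit in the relative model frame.
bar_m = scale_bar_length((0.0, 0.0, 0.0), (3.0, 4.0, 0.0))
sling_m = to_absolute(2.0, 1.0, bar_m)  # a 2.0-unit model length in metres
```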
Abbreviations
3D: Three-dimensional
VR: Virtual reality
AR: Augmented reality
RTS: Robotic total station
RTK-GPS: Real-time kinematic based global positioning system
CAD: Canadian dollar
API: Application programming interface
References
Abdel-Aziz YI, Karara HM: Direct linear transformation from comparator coordinates into object space coordinates in close-range photogrammetry. Proceedings of the symposium on close-range photogrammetry 1971, 1–18.
Bhatla A, Choe S, Fierro O, Leite F: Evaluation of accuracy of as-built 3D modeling from photos taken by handheld digital cameras. Automation in construction 2012, 28: 116–127.
Dai F, Lu M: Assessing the accuracy of applying photogrammetry to take geometric measurements on building products. Journal of construction engineering and management 2010, 136(2): 242–250. 10.1061/(ASCE)CO.1943-7862.0000114
Dai F, Lu M: Three-dimensional modeling of site elements by analytically processing image data contained in site photos. Journal of construction engineering and management 2013, in press.
El-Omari S, Moselhi O: Integrating 3D laser scanning and photogrammetry for progress measurement of construction work. Automation in construction 2008, 18(1): 1–9. 10.1016/j.autcon.2008.05.006
EOS Systems: PhotoModeler – Quick start guide. EOS Systems Inc; 2011.
Fathi H, Brilakis I: A videogrammetric as-built data collection framework for digital fabrication of sheet metal roof panels. In Proceedings of the 19th EG-ICE Workshop on Intelligent Computing in Engineering, 4–6 July 2012. Herrsching, Germany; 2012.
Golparvar-Fard M, Peña-Mora F, Savarese S: D4AR – A 4-dimensional augmented reality model for automating construction progress data collection, processing and communication. Journal of information technology in construction 2009, 14: 129–153.
Goodrum P, Haas C, Caldas C, Zhai D, Yeiser J, Homm D: Model to predict the impact of a technology on construction productivity. Journal of construction engineering and management 2011, 137(9): 678–688. 10.1061/(ASCE)CO.1943-7862.0000328
Government of Alberta: Alberta’s energy industry: an overview 2009. Government of Alberta; 2010.
Kamat VR, Martinez JC, Fischer M, Golparvar-Fard M, Peña-Mora F, Savarese S: Research in visualization techniques for field construction. Journal of construction engineering and management 2011, 137(10): 853–862. 10.1061/(ASCE)CO.1943-7862.0000262
Kavanagh BF: Surveying with construction applications. 7th edition. Prentice Hall; 2009.
King BA: Some considerations for the statistical testing of least squares adjustments of photogrammetric bundles. The photogrammetric record 1997, 15(90): 929–935. 10.1111/0031-868X.00102
Leica Geosystems: Product overview – total stations. Leica Geosystems AG; 2012.
McGlone JC, Mikhail EM, Bethel JS, Mullen R: Manual of photogrammetry. 5th edition. American society for photogrammetry and remote sensing; 2004.
Mikhail EM, Bethel JS, McGlone JC: Introduction to modern photogrammetry. Wiley; 2001.
Shen XS, Lu M, Chen W: Tunnel-boring machine positioning during microtunneling operations through integrating automated data collection with real-time computing. Journal of construction engineering and management 2011, 137(1): 72–85. 10.1061/(ASCE)CO.1943-7862.0000250
Siu MF, Lu M: Augmenting site photos with 3D as-built tunnel models for construction progress visualization. In Proceedings of the 9th international conference on construction applications of virtual reality (CONVR 2009), 187–196, Nov. 5–6, 2009. Sydney, Australia; 2009.
Siu MF, Lu M: Bored pile construction visualization by enhanced production-line chart and augmented-reality photos. In Proceedings of the 10th international conference on construction applications of virtual reality (CONVR 2010), 165–174, Nov. 4–5, 2010. Sendai, Miyagi, Japan; 2010.
Siu MF, Lu M: Augmented-reality visualizations of bored pile construction. In Proceedings of the 2011 CSCE annual general meeting and conference (CSCE 2011), 10 pages, Jun. 14–17, 2011. Ottawa, Canada; 2011.
Westover L, Olearczyk J, Hermann U, Adeeb S, Mohamed Y: Module rigging assembly dynamic finite element analysis. In Proceedings of the CSCE annual general meeting and conference, Jun. 6–9, 2012. Edmonton, Alberta, Canada; 2012.
Acknowledgements
The presented research is substantially funded by a Natural Sciences and Engineering Research Council of Canada (NSERC) Collaborative Research and Development Grant (CRDPJ 414616-11). The authors are grateful to Ulrich Hermann (Manager of Construction Engineering, PCL Industrial Management Inc.) for providing insight and advice on the design and implementation of the field experiments. We also acknowledge Xuesong Shen and Sheng Mao for operating the robotic total stations during the field experiments.
Author information
Authors and Affiliations
Corresponding author
Additional information
Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
Siu conducted the literature review, designed the lab testing, participated in the field testing, processed the field-collected data, and performed the data analysis. Lu proposed the research, supervised the first author, and led a research team to design and perform the field experiments. AbouRizk co-supervised the first author, advised on the design of the field experiments and facilitated the implementation of the field testing. All authors read and approved the final manuscript.
Authors’ original submitted files for images
Below are the links to the authors’ original submitted files for images.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License ( https://creativecommons.org/licenses/by/2.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
About this article
Cite this article
Siu, M.F.F., Lu, M. & AbouRizk, S. Combining photogrammetry and robotic total stations to obtain dimensional measurements of temporary facilities in construction field. Vis. in Eng. 1, 4 (2013). https://doi.org/10.1186/2213-7459-1-4
Received:
Accepted:
Published:
DOI: https://doi.org/10.1186/2213-7459-1-4
Keywords
 Time-dependent modeling
 Photogrammetry
 Robotic total station
 Synchronization
 Integration
 Rigging system
 Lifting frame
 Augmented reality
 Visualization