The quality of input data for geomechanical models and designs has been a persistent issue for as long as such models have existed. Publications and recommendations have addressed the topic, yet problems remain. This article analyses the origins of poor-quality data, and of the unreliable models built on them, drawing on the analysis of and experience from many site investigations, and aims to provide effective solutions. A review is conducted and the most frequent causes of unreliable data are presented. Even where laboratory testing procedures are appropriate, most problems stem from a poor description of the tested samples (weathering, alteration, weakness planes, pre-existing incipient fractures) and from the fact that, too often, the selected samples cannot provide the required or anticipated results. We emphasize the need for consistency between the technical objectives of a site investigation and the equipment and procedures chosen to acquire the data required to reach those objectives. Depending on the required data, practical aspects are addressed, such as i) the type of core drilling and/or core barrel to use, ii) the importance of geometrical information for fractures and the use of acoustic televiewers versus optical imagery, and iii) the selection of the most suitable method for measuring in situ stresses depending on rock type and structural conditions, among others. Finally, we suggest a procedure for core sampling for classical geomechanical laboratory tests (UCS, tensile, and triaxial tests), the aim being to identify abnormal data or data sets and validate the overall quality of the data. To conclude, the emphasis is on constant interaction between structural geologists and rock mechanics engineers to ensure a common understanding of the geological conditions, suitably adapted testing, and ultimately better geomechanical models built on reliable input data.
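The article's own screening procedure for abnormal laboratory results is not reproduced here. As a hypothetical illustration only, the kind of statistical screen it alludes to (flagging suspect values in a set of UCS results before they enter a model) can be sketched with a standard Tukey interquartile-range fence; the function name, the `k` multiplier, and the sample values below are all assumptions, not taken from the article.

```python
# Hypothetical sketch: flag abnormal values in a laboratory data set
# (e.g. UCS results in MPa) using the Tukey interquartile-range fence.
def flag_outliers(values, k=1.5):
    """Return (normal, abnormal) lists; abnormal = outside [Q1 - k*IQR, Q3 + k*IQR]."""
    s = sorted(values)
    n = len(s)

    def quantile(q):
        # Linear-interpolation quantile estimate on the sorted sample
        pos = q * (n - 1)
        lo = int(pos)
        hi = min(lo + 1, n - 1)
        return s[lo] + (pos - lo) * (s[hi] - s[lo])

    q1, q3 = quantile(0.25), quantile(0.75)
    iqr = q3 - q1
    lo_fence, hi_fence = q1 - k * iqr, q3 + k * iqr
    normal = [v for v in values if lo_fence <= v <= hi_fence]
    abnormal = [v for v in values if v < lo_fence or v > hi_fence]
    return normal, abnormal

# Illustrative (invented) UCS data set: one specimen, perhaps weathered or
# containing an incipient fracture, gives a markedly lower strength.
ucs_mpa = [112, 118, 121, 109, 115, 42, 120, 117]
normal, abnormal = flag_outliers(ucs_mpa)
print(abnormal)  # → [42]: the suspect specimen is flagged for re-examination
```

A flagged value is not automatically discarded: in line with the article's argument, it should send the engineer back to the sample description (weathering, weakness planes, incipient fractures) to decide whether the result reflects the intact rock or a defective specimen.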
