Many concerns have been expressed regarding discrepancies between LWD (Logging-While-Drilling) and WL (Wireline) natural gamma ray (GR) responses in various wells. It has been suggested that these differences arise from variations in tool design that make each tool differently sensitive to the relative mixture of Thorium, Uranium, and Potassium concentrations between the University of Houston calibration test pits and the formations actually logged. Other proposed causes include errors in the characterization process and the extent and accuracy of the borehole correction algorithms applied to the measured values.
To answer these questions, we used the Monte Carlo code MCNP in combination with an in-house code, GAMRES, to simulate the response of the gamma-ray detectors in a variety of LWD and Wireline tools. Each tool's sensitivity to Thorium, Uranium, and Potassium concentrations depends on its geometric parameters, such as NaI crystal size and the thickness of the collar material surrounding the detector. To quantify this sensitivity, an overall sensitivity coefficient is calculated for each tool at the time of calibration in a pit of known Thorium, Uranium, and Potassium concentrations. Under different borehole conditions (mud weight, barite mud, borehole size, etc.), the tools respond differently according to their particular sensitivities. This modeling allows us to assess the difference in API units between the LWD and Wireline gamma-ray measurements as a function of tool geometry, borehole size, Thorium, Uranium, and Potassium concentrations, and mud weight, both before and after applying the published borehole correction algorithms.
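The calibration step described above can be sketched as follows. This is a hypothetical illustration, not the paper's GAMRES implementation: the weighting factors use the commonly quoted rule-of-thumb approximation GR(API) ≈ 4·Th(ppm) + 8·U(ppm) + 16·K(wt%), and the pit concentrations and count rates are made-up example values.

```python
# Hypothetical sketch of deriving a per-tool sensitivity coefficient at
# calibration and applying it to a field measurement. Weighting factors
# are the standard rule-of-thumb API contributions; all numeric inputs
# below are illustrative, not taken from the study.

# Approximate API contribution per unit concentration.
W_TH, W_U, W_K = 4.0, 8.0, 16.0  # API per ppm Th, per ppm U, per wt% K

def api_from_concentrations(th_ppm, u_ppm, k_pct):
    """Reference API value implied by a Th/U/K mixture."""
    return W_TH * th_ppm + W_U * u_ppm + W_K * k_pct

def calibrate_sensitivity(count_rate_cps, th_ppm, u_ppm, k_pct):
    """Overall sensitivity coefficient (API per cps) from a calibration
    pit of known Th, U, and K concentrations."""
    return api_from_concentrations(th_ppm, u_ppm, k_pct) / count_rate_cps

# Example: a tool registers 200 cps in a pit with 12 ppm Th, 6 ppm U, 2% K.
sens = calibrate_sensitivity(200.0, 12.0, 6.0, 2.0)  # (4*12 + 8*6 + 16*2)/200
gr_api = sens * 150.0  # a field reading of 150 cps converted to API units
```

Because the coefficient folds the tool's geometry-dependent detection efficiency into a single number tied to one particular Th/U/K mixture, two tools calibrated in the same pit can still diverge in formations or boreholes whose conditions differ from the calibration environment, which is the effect the Monte Carlo modeling quantifies.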
Results from this study indicate that significant response differences may arise not from tool design differences but from differing borehole and/or mud conditions when the appropriate corrections are not applied.