In this work, we present four different methodologies for reducing the computational effort of fatigue assessment for offshore wind turbine support structures. To test these methods, we use them to predict the total fatigue damage of several modified support structure designs based on subsets of the load case set that are about 6–17 times smaller than the original. Three of the methods give quite accurate predictions, with expected errors of no more than 4–8%, though the variance inherent in some of the methods imposes certain limitations.
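As a toy illustration of the kind of subset-based prediction this involves (and not of the four methodologies themselves, which are developed below), the following sketch estimates the probability-weighted total damage from a uniformly sampled subset of load cases. All inputs are synthetic, and the Horvitz-Thompson style reweighting is an assumed, generic choice, not the paper's method.

```python
import numpy as np

# Hypothetical inputs: per-load-case fatigue damages d_i (as obtained from
# time-domain simulations) and occurrence probabilities p_i for n cases.
rng = np.random.default_rng(0)
n = 600
p = rng.dirichlet(np.ones(n))                     # assumed occurrence probabilities
d = rng.lognormal(mean=-8.0, sigma=0.5, size=n)   # assumed per-case damages

# Full assessment: probability-weighted (Miner) sum over all load cases.
D_full = np.sum(p * d)

# Subset assessment: evaluate only m << n cases and reweight by the inverse
# inclusion probability m/n (uniform sampling without replacement).
m = n // 10                                       # roughly a 10x reduction
idx = rng.choice(n, size=m, replace=False)
D_subset = (n / m) * np.sum(p[idx] * d[idx])

print(f"full: {D_full:.3e}, subset: {D_subset:.3e}, "
      f"rel. error: {abs(D_subset - D_full) / D_full:.1%}")
```

Repeating the sampling shows that such an estimator is unbiased but that its variance grows as the subset shrinks, which is exactly the kind of limitation noted above.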
One of the main challenges in the design of offshore wind turbine support structures is the complexity of both the structure itself and the offshore environment. This complexity means that assessing the performance of the structure requires not only the use of detailed models, but also the investigation of a large number of different scenarios. Specifically, the standards that the design must conform to (e.g. International Electrotechnical Commission (2009)) prescribe literally thousands of different design load cases (DLCs) that must be assessed for any given structure, covering both the environmental states one expects to encounter at a given site and the scenarios that the structure is likely to experience. In short, we need to run detailed models, and we need to run them many times. For a single assessment of a design, this can be accommodated by ever-improving computer hardware and the increased access to computer clusters enjoyed by institutions and individuals alike. However, for those wishing to run probabilistic assessments, to optimize the design, or (worse still) to do both at once, the large number of DLCs remains an important challenge, one that should be addressed not just by improved hardware but by improved methodology. This is the main topic of the work presented below.
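To give a feel for the numbers, the following back-of-the-envelope count shows how quickly the simulations multiply for a single fatigue-relevant DLC group. Every bin count below is an assumed, illustrative value rather than a number prescribed by the standard.

```python
# Illustrative count of time-domain simulations for one fatigue DLC group.
# All bin counts are assumptions chosen for illustration only.
wind_speed_bins = 12       # e.g. 4-28 m/s in 2 m/s steps
wave_height_bins = 5       # significant wave heights conditioned on wind speed
wave_period_bins = 3       # peak periods conditioned on wave height
wind_wave_directions = 6   # wind-wave misalignment combinations
random_seeds = 6           # seeds per environmental state for statistical convergence

per_wind_speed = wave_height_bins * wave_period_bins * wind_wave_directions * random_seeds
total = wind_speed_bins * per_wind_speed
print(f"Simulations for one fatigue DLC group: {total}")  # 12 * 5 * 3 * 6 * 6 = 6480
```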
As it stands, it is not possible to completely replace the standard assessment with something new. Rather, one seeks to approximate the results of such full assessments by a less computationally demanding procedure. If the approximation is good enough, it may serve as a replacement for the conventional procedure whenever small deviations from the true estimates (e.g., of fatigue damage) are allowable. Especially in a context like optimization, simplifications leading to such small deviations are often expected and, if the size of the deviations can be estimated, one may even incorporate them as modeling errors in a probabilistic analysis (sketched below). Previous work on approximate assessments has met with some success, but has tended to rely on strong simplifications (for example in the types of DLCs studied), has struggled to achieve a sufficiently accurate approximation while also achieving a sufficient decrease in analysis time, or has faced a combination of these issues. One approach is to abandon the time domain entirely and instead analyze the structure in the frequency domain (see e.g. van der Tempel (2006)), but this approach has its own set of issues, and we will here focus on methods in the time domain.
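As a minimal sketch of the modeling-error point above: assuming the deviation of a simplified assessment can be characterized as roughly multiplicative with a known scatter, it can be folded into a Monte Carlo check against the Miner damage limit as follows. All numbers here are assumptions for illustration, loosely in the spirit of the 4–8% expected errors mentioned above.

```python
import numpy as np

rng = np.random.default_rng(1)

D_hat = 0.9   # assumed damage estimate from a simplified (subset) assessment

# Multiplicative model-error factor: median 1, roughly 6% scatter (assumed).
X_m = rng.lognormal(mean=0.0, sigma=0.06, size=100_000)

# Damage with model uncertainty applied, checked against the Miner limit D = 1.
D = X_m * D_hat
print(f"P(D > 1) = {np.mean(D > 1.0):.2e}")
```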