ABSTRACT

Vortex-induced vibration (VIV) is a nonlinear fluid-structure interaction (FSI) phenomenon observed when a marine riser or pipeline is exposed to sea currents. During VIV, the trajectory of a given riser cross-section is highly complicated and, in general, never repeats itself, which makes the prediction of riser VIV an extremely challenging task. Recently, thanks to the rapid development of machine learning techniques, various neural network (NN) architectures have been applied to nonlinear problems to establish the mapping between input and output variables by learning from a given dataset. Taking advantage of these advances, the present study adopts two representative NN architectures to predict riser VIV. The first is the Multi-Layer Perceptron (MLP), the most basic NN paradigm, characterized by multiple fully connected layers of neurons. The second is the Long Short-Term Memory (LSTM) network, a variant of the Recurrent Neural Network (RNN) designed to capture the sequential information in the input variables. The input data are generated with the CFD code FANS, which is used to perform a three-dimensional simulation of a flexible riser in uniform inflow. Flow-field quantities at selected spatial points and the response of the riser are recorded over 30,000 time steps. The flow-field quantities are then fed into the NN architectures to predict the motions of the riser during VIV, and the performance of the two architectures is evaluated and compared.
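To make the prediction setup concrete, the following minimal sketch (in PyTorch) shows how an MLP and an LSTM could map flow-field samples to riser motion. All dimensions, layer sizes, and variable names here are illustrative assumptions, not the hyperparameters actually used in the study.

import torch
import torch.nn as nn

# Hypothetical dimensions: 64 flow-field probes with three velocity
# components each, predicting the in-line and cross-flow displacements
# of one riser cross-section (all three values are assumptions).
N_FEATURES = 64 * 3   # flow quantities per time step
N_OUTPUTS = 2         # in-line and cross-flow displacement
SEQ_LEN = 50          # history window fed to the LSTM

# MLP: maps the flow state at a single time step to the riser response.
mlp = nn.Sequential(
    nn.Linear(N_FEATURES, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, N_OUTPUTS),
)

# LSTM: consumes a window of past flow states, so the sequential
# information of the input series is retained in the hidden state.
class LSTMPredictor(nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(N_FEATURES, 128, batch_first=True)
        self.head = nn.Linear(128, N_OUTPUTS)

    def forward(self, x):                # x: (batch, SEQ_LEN, N_FEATURES)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])     # predict from the last hidden state

batch = torch.randn(8, SEQ_LEN, N_FEATURES)
print(mlp(batch[:, -1]).shape)           # torch.Size([8, 2])
print(LSTMPredictor()(batch).shape)      # torch.Size([8, 2])

The MLP sees only the instantaneous flow state, whereas the LSTM sees a window of its history; this difference is what the comparison in the paper probes.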

INTRODUCTION

Machine learning, especially deep learning (LeCun et al., 2015), which takes advantage of artificial neural networks (ANN), has proven to be a powerful and effective approach in natural language processing (NLP) (Collobert et al., 2011), image processing (Sonka et al., 2013), object detection (Zhao et al., 2019), recommender systems (Cheng et al., 2016), weather forecasting (Bi et al., 2023), etc. To handle modeling tasks with distinct intrinsic characteristics, different neural network (NN) structures have been designed to learn the corresponding input features more effectively. In general, there are three fundamental types of ANN: multi-layer perceptrons (MLP) (Gardner and Dorling, 1998), convolutional neural networks (CNN) (LeCun and Bengio, 1995), and recurrent neural networks (RNN) (Lipton et al., 2015). The MLP is the most fundamental ANN structure, featuring a stack of fully connected neuron layers, and can in principle be applied to any machine-learning task. The CNN was designed to learn image or matrix-like data effectively, leveraging the convolution operation in the abstraction of input features. The RNN, in contrast, was introduced to account for the time history of the input features, so that the information contained in the input sequence itself can be effectively extracted, as sketched below.
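The defining property of an RNN can be shown with a single recurrence step. The sketch below implements one step of a standard (Elman) RNN cell in NumPy; the sizes and weights are arbitrary illustrations, and the present study uses the LSTM refinement of this idea rather than the vanilla cell shown here.

import numpy as np

# One step of a vanilla (Elman) RNN cell; sizes are arbitrary.
n_in, n_hidden = 3, 8
rng = np.random.default_rng(0)
W_x = 0.1 * rng.standard_normal((n_hidden, n_in))      # input weights
W_h = 0.1 * rng.standard_normal((n_hidden, n_hidden))  # recurrent weights
b = np.zeros(n_hidden)

def rnn_step(x_t, h_prev):
    # h_t depends on the current input AND the previous hidden state,
    # which is how an RNN carries information from earlier time steps.
    return np.tanh(W_x @ x_t + W_h @ h_prev + b)

h = np.zeros(n_hidden)
for x_t in rng.standard_normal((5, n_in)):  # a short input sequence
    h = rnn_step(x_t, h)                    # state threads through time

An MLP, by contrast, would process each x_t independently and discard the ordering of the sequence entirely.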
