Advisory Committee Chair
Ian Knowles
Advisory Committee Members
Carmeliza Navasca
Hemant Tiwari
Min Sun
Wenzhang Huang
Document Type
Dissertation
Date of Award
2019
Degree Name by School
Doctor of Philosophy (PhD), College of Arts and Sciences
Abstract
Inverse problems arise in a wide spectrum of applications, in fields ranging from engineering to scientific computation. Connected with the rise of interest in inverse problems is the development and analysis of regularization methods, such as truncated singular value decomposition (TSVD), Tikhonov regularization, and iterative regularization methods (like Landweber iterations), which are a necessity in most inverse problems due to their ill-posedness. TSVD can be used when dealing with (small) finite-dimensional linear problems, but it is computationally very expensive, and sometimes even infeasible, for large-scale linear problems or nonlinear problems. In such scenarios, Tikhonov regularization is an attractive alternative, but it comes at the price of computing an optimal value for the associated (external) regularization parameter, which is a non-trivial task. The best candidates in these situations turn out to be iterative regularization methods, such as Landweber-type iterations. The drawback of Landweber-type iterations, however, is that their convergence rate can be arbitrarily slow. In this thesis, we propose a new iterative regularization technique for solving inverse problems that does not depend on any external parameters, thereby avoiding all the difficulties associated with their selection. To boost the convergence rate of the iterative method, different descent directions are provided, depending on the source conditions, which are based on specific a-priori knowledge about the solution. We show that this method is very robust to the presence of extreme errors in the data. In addition, we provide a very efficient (heuristic) stopping strategy, which is essential for any iterative regularization method, even in the absence of noise information. This is crucial because most regularization methods depend critically on the noise information (error norm) to determine the stopping rule, whereas for real-life data it is usually unknown. To illustrate the effectiveness and computational efficiency of this method, we apply the technique to numerically solve some classical inverse integral problems, such as Fredholm- or Volterra-type integral equations (in particular, numerical differentiation), and compare the results with standard regularization methods such as Tikhonov and TSVD.
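As a point of reference for the standard methods named in the abstract (not the thesis's proposed parameter-free technique), the following is a minimal sketch of Landweber iteration with a discrepancy-principle stopping rule and of Tikhonov regularization, applied to numerical differentiation posed as a discretized Volterra-type integral equation. The grid size, noise level, step size, and regularization parameter are illustrative assumptions.

```python
import numpy as np

# Numerical differentiation as an integral inverse problem:
# given noisy samples of g(t) = \int_0^t f(s) ds, recover f.
# Discretize the integration operator on a uniform grid (illustrative choice).
n = 100
h = 1.0 / n
t = np.linspace(h, 1.0, n)
A = h * np.tril(np.ones((n, n)))          # lower-triangular integration matrix

f_true = np.sin(2 * np.pi * t)            # assumed test solution
g_clean = A @ f_true
rng = np.random.default_rng(0)
delta = 1e-3                              # assumed per-sample noise level
g_noisy = g_clean + delta * rng.standard_normal(n)

# Landweber iteration: f_{k+1} = f_k + w * A^T (g - A f_k), with 0 < w < 2/||A||^2,
# stopped here via the discrepancy principle (which requires a noise estimate).
w = 1.0 / np.linalg.norm(A, 2) ** 2
f_lw = np.zeros(n)
for k in range(100000):
    residual = g_noisy - A @ f_lw
    if np.linalg.norm(residual) <= 1.1 * delta * np.sqrt(n):
        break
    f_lw = f_lw + w * (A.T @ residual)

# Tikhonov regularization: minimize ||A f - g||^2 + alpha ||f||^2,
# i.e. solve (A^T A + alpha I) f = A^T g for an externally chosen alpha.
alpha = 1e-4                              # assumed regularization parameter
f_tik = np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ g_noisy)

print("Landweber iterations used:", k)
print("Landweber rel. error:", np.linalg.norm(f_lw - f_true) / np.linalg.norm(f_true))
print("Tikhonov  rel. error:", np.linalg.norm(f_tik - f_true) / np.linalg.norm(f_true))
```

The sketch illustrates the two costs discussed above: Landweber needs a stopping index (here tied to the assumed noise level), while Tikhonov needs an externally chosen alpha; the thesis's contribution is a method that avoids such external parameters.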
Recommended Citation
Nayak, Abinash, "Inverse Problems, Regularization and Its Applications" (2019). All ETDs from UAB. 2563.
https://digitalcommons.library.uab.edu/etd-collection/2563