Department of Mathematics
http://hdl.handle.net/1956/1065
Fri, 27 May 2016 00:38:57 GMT
http://hdl.handle.net/1956/11975
General Slit Stochastic Löwner Evolution and Conformal Field Theory
Tochin, Alexey
Doctoral thesis
Tue, 01 Sep 2015 00:00:00 GMT
http://hdl.handle.net/1956/11884
Convergence of a Cell-Centered Finite Volume Discretization for Linear Elasticity
Nordbotten, Jan Martin
Journal article
We show convergence of a cell-centered finite volume discretization for linear elasticity. The discretization, termed the MPSA method, was recently proposed in the context of geological applications, where cell-centered variables are often preferred. Our analysis utilizes a hybrid variational formulation, which has previously been used to analyze finite volume discretizations for the scalar diffusion equation. The current analysis deviates significantly from the previous in three respects. First, additional stabilization leads to a more complex saddle-point problem. Second, a discrete Korn's inequality has to be established for the global discretization. Finally, robustness with respect to the Poisson ratio is analyzed. The stability and convergence results presented herein provide the first rigorous justification of the applicability of cell-centered finite volume methods to problems in linear elasticity.
Thu, 19 Nov 2015 00:00:00 GMT
http://hdl.handle.net/1956/11835
Two-phase flow in porous media: dynamic capillarity and heterogeneous media
van Duijn, Cornelius J.; Cao, Xiulei; Pop, Iuliu Sorin
Journal article
We investigate a two-phase porous media flow model in which dynamic effects are taken into account in the phase pressure difference. We consider a one-dimensional heterogeneous case, with two adjacent homogeneous blocks separated by an interface. The absolute permeability is assumed constant, but different in each block. This may lead to the entrapment of the non-wetting phase (say, oil) when flowing from the coarse material into the fine material. We derive the interface conditions coupling the models in each homogeneous block. In doing so, the interface is approximated by a thin porous layer, and its thickness is then sent to zero. Such results have been obtained earlier for standard models based on an equilibrium relationship between the capillary pressure and the saturation; there, oil is trapped until its saturation on the coarse-material side of the interface exceeds an entry value. In the non-equilibrium case, the situation is different. Due to the dynamic effects, oil may still flow into the fine material even after the saturation drops below the entry value, and this flow may continue for an amount of time proportional to the non-equilibrium effects. This suggests that operating in a dynamic regime reduces the amount of oil trapped at interfaces, leading to enhanced oil recovery. Finally, we present some numerical results supporting the theoretical findings.
Tue, 11 Aug 2015 00:00:00 GMT
http://hdl.handle.net/1956/11782
An Open-Source Toolchain for Simulation and Optimization of Aquifer-Wide CO2 Storage
Andersen, Odd; Lie, Knut-Andreas; Nilsen, Halvor Møll
Journal article
Planning and execution of large-scale, aquifer-wide CO2 storage operations will require extensive use of computer modelling and simulations. The required computational tools will vary depending on the physical characteristics of the targeted aquifer, the stage of the project, and the questions asked, which may not always be anticipated in advance. In this paper, we argue that a one-size-fits-all simulation tool for modelling CO2 storage does not exist. Instead, we propose an integrated toolchain of computational methods that can be used in a flexible way to set up adaptive workflows. Although a complete toolchain will require computational methods at all levels of complexity, we further argue that lightweight methods play a particularly important role in addressing many of the relevant questions. We have implemented a number of such simplified methods in MRST-co2lab, a separate module of the open-source MATLAB Reservoir Simulation Toolbox (MRST). In particular, MRST-co2lab contains percolation-type methods that within seconds can identify structural traps and their catchment areas and compute spill paths and rough estimates of trapping capacity. The software also offers two-phase simulators based on vertical-equilibrium assumptions that can forecast structural, residual, and solubility trapping in a thousand-year perspective and are orders of magnitude faster than traditional 3D simulators. Herein, we apply these methods to realistic, large-scale datasets to demonstrate their capabilities and show how they can be used in combination to address the optimal use of open aquifers for large-scale storage.
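As a toy illustration of the percolation-type idea (identifying structural traps and their volumes from caprock geometry alone), the following hypothetical Python sketch computes how much buoyant CO2 a 1-D caprock elevation profile can hold once every trap has filled to its spill point; the actual MRST-co2lab routines operate on 3-D top-surface grids and are considerably more involved.

```python
def trapped_profile(z):
    """Thickness of buoyant CO2 held under each cell of a 1-D caprock
    elevation profile z, once every structural trap has filled to its
    spill point (fluid escapes across the open left/right boundaries).

    CO2 under cell i is retained down to the highest of the lowest caprock
    elevations encountered on the escape paths to either boundary."""
    n = len(z)
    min_left = [0.0] * n   # lowest caprock elevation on the path to the left boundary
    min_right = [0.0] * n  # lowest caprock elevation on the path to the right boundary
    running = float('inf')
    for i in range(n):
        running = min(running, z[i])
        min_left[i] = running
    running = float('inf')
    for i in reversed(range(n)):
        running = min(running, z[i])
        min_right[i] = running
    return [max(0.0, z[i] - max(min_left[i], min_right[i])) for i in range(n)]
```

For example, the profile `[0, 5, 2, 6, 0]` has two domes separated by a saddle; the whole structure fills down to the boundary elevation 0, holding a total column of 13 units. A monotone caprock holds nothing, since any CO2 migrates straight out.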
Thu, 04 Feb 2016 00:00:00 GMT
http://hdl.handle.net/1956/11768
Upscaling of Nonisothermal Reactive Porous Media Flow under Dominant Péclet Number: The Effect of Changing Porosity
Bringedal, Carina; Berre, Inga; Pop, Iuliu Sorin; Radu, Florin A.
Journal article
Motivated by rock-fluid interactions occurring in a geothermal reservoir, we present a two-dimensional pore scale model of a thin strip consisting of void space and grains, with fluid flow through the void space. Ions in the fluid are allowed to precipitate onto the grains, while minerals in the grains are allowed to dissolve into the fluid, taking into account the possible change in the aperture of the strip that these two processes cause. Temperature variations and possible effects of the temperature on both the fluid density and viscosity and on the mineral precipitation and dissolution reactions are included. For the pore scale model equations, we investigate the limit as the width of the strip approaches zero, deriving one-dimensional effective equations. We assume that convection dominates over diffusion in the system, resulting in Taylor dispersion in the upscaled equations and a Forchheimer-type term in Darcy's law. We present numerical results comparing the upscaled model with three simpler versions: two that still honor the changing aperture of the strip but do not include Taylor dispersion, and one that keeps the aperture fixed but contains dispersive terms.
Fri, 01 Jan 2016 00:00:00 GMT
http://hdl.handle.net/1956/11748
Oppfatningar og misoppfatningar i sannsyn: Ein mixed methods-studie av elevar på vidaregåande trinn – yrkesfagleg retning
Handegård, Tone
Master thesis
This master's thesis concerns how students in upper secondary school, vocational track, understand concepts in probability. The aim has been to uncover which conceptions and misconceptions the students hold, and which solution strategies they use when answering the questions. Furthermore, I have examined to what extent my findings agree with earlier findings reported in the literature, in addition to looking more closely at the correlation between grades and results.
The study used mixed methods for the data collection, which was carried out in January and February 2015. First, a quantitative survey was conducted in which 93 students participated. About two thirds of these attend the first year (vg1) of various vocational programmes, while about one third attend the supplementary year qualifying for general university admission. Afterwards, a qualitative study was carried out in which 9 students were interviewed. None of the students had received instruction in probability immediately before the data collection. The questions are largely based on questions from earlier studies, so they can to some extent be said to be quality assured.
All the misconceptions I wished to test turned out to be present, to varying degrees, among the students. The numerical results from my survey otherwise largely coincide with what researchers have found earlier, despite the studies varying greatly in age, school background and continent.
The students used various solution strategies that are also found in the literature and in earlier research, in particular the outcome approach and the 50-50 approach. Through the interviews I also found that the students tend to rely on their intuition, and it is only when they have to think through what the question actually asks that they can give a normatively correct answer. It is especially within probability that the use of intuition leads us into misconceptions; we usually rely on what we have learned before.
Fri, 01 Jan 2016 00:00:00 GMT
http://hdl.handle.net/1956/11747
Two-scale preconditioning for two-phase nonlinear flows in porous media
Skogestad, Jan Ole; Keilegavlen, Eirik; Nordbotten, Jan Martin
Journal article
Solving realistic problems related to flow in porous media to desired accuracy may be prohibitively expensive with available computing resources. Multiscale effects and nonlinearities in the governing equations are among the most important contributors to this situation. Hence, developing methods that handle these features better is essential in order to be able to solve the problems more efficiently. Focus has until recently largely been on preconditioners for linearized problems. This article proposes a two-scale nonlinear preconditioning technique for flow problems in porous media that allows for incorporating physical intuition directly in the preconditioner. By assuming a certain dominant physical process, this technique will resemble upscaling in the equilibrium limit, with the computational benefits that follow. In this study, the method is established as a preconditioner with good scalability properties for challenging problems regardless of dominant physics, thus laying the foundation for further studies with physical information in the preconditioner.
Thu, 01 Jan 2015 00:00:00 GMT
http://hdl.handle.net/1956/11732
Trigonometric interpolation on lattice grids
Sørevik, Tor; Nome, Morten
Journal article
In this paper we construct non-aliasing interpolation spaces and Lagrange functions for lattice grids. We argue that lattice grids are good for trigonometric interpolation and support this claim by numerical experiments. A greedy algorithm allows us to embed hyperbolic crosses in our interpolation spaces, and numerical experiments indicate that lattice grids are at least as good as sparse grids for trigonometric interpolation. A straightforward FFT-algorithm for functions sampled on lattice grids allows for fast computation and good approximation.
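The FFT evaluation mentioned above is easy to illustrate in the simplest case of a uniform 1-D grid, itself a trivial lattice grid; the rank-1 lattice constructions studied in the paper are more general. The following hypothetical Python sketch evaluates the trigonometric interpolant of uniformly spaced periodic samples by zero-padding the Fourier coefficients.

```python
import numpy as np

def trig_interp(samples, m):
    """Evaluate the trigonometric interpolant of n uniformly spaced
    periodic samples on a finer uniform grid of m >= n points, by
    placing the n Fourier coefficients into a length-m spectrum."""
    n = len(samples)
    c = np.fft.fft(samples) / n                    # Fourier coefficients c_k
    k = np.fft.fftfreq(n, d=1.0 / n).astype(int)   # frequencies 0..n/2-1, -n/2..-1
    big = np.zeros(m, dtype=complex)
    big[k] = c                                     # zero-pad: each c_k at slot k
    return np.real(np.fft.ifft(big) * m)           # evaluate on the fine grid
```

For a band-limited function (highest frequency below n/2) the interpolant reproduces the function exactly on the finer grid, up to rounding error.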
Thu, 01 Jan 2015 00:00:00 GMT
http://hdl.handle.net/1956/11668
Wave breaking in long wave models and undular bores
Brun, Mats Kirkesæther
Master thesis
Bores are a well-known phenomenon in fluid mechanics, although their occurrence in nature is relatively rare. They usually occur when a tidal swell causes a difference in surface elevation at the mouth of a river or narrow bay, causing long waves to propagate upstream; the term 'tidal bore' is also frequently used in this context. Depending on the conditions, the bore may take on various forms, ranging from a smooth wavefront followed by a smaller wave train to one single breaking wavefront. Some noteworthy locations where tidal bores can be found include the River Seine in France, the Petitcodiac River in Canada, and the Qiantang River in China. Common to all these locations is a large tidal range. Bores, when powerful enough, can produce particularly unsafe environments for shipping, but at the same time popular opportunities for river surfing.
As found by Favre in 1935 through wave tank experiments, the strength of the bore can be determined by the ratio of the incident water level above the undisturbed water depth to the undisturbed water depth. Denoting this ratio by $\alpha$, bores fall into one of three categories: if $\alpha$ is less than 0.28 the bore is purely undular, and will feature oscillations downstream of the bore front; if $\alpha$ is between 0.28 and 0.75 the bore will continue to feature oscillations, but one or more waves behind the bore front will start to break; and if $\alpha$ is greater than 0.75 the bore is completely turbulent, and can no longer be described by standard potential flow theory.
The goal of this report is to simulate the time evolution of an undular bore through numerical experiments, using a dispersive nonlinear shallow water theory, in particular the Korteweg-de Vries (KdV) equation. This is a third-order nonlinear partial differential equation whose dependent variable describes the displacement of the free surface. When deriving this equation, an expression for the velocity field of the flow is also obtained; it can be evaluated at any point in the fluid as long as the displacement of the surface is known. Thus, solving the KdV equation also yields the fluid particle velocity field, which can be used to calculate fluid particle trajectories, as done by Bjørkvåg and Kalisch, but also to formulate a breaking criterion. By applying this breaking criterion to the undular bore, the onset of breaking, and thus also a maximum allowable wave height (due to the nonlinearity of the model equation), can be computed numerically. The criterion can also be applied to the exact traveling wave solutions of the KdV equation, namely the 'solitary wave' and 'cnoidal wave' solutions. These are waves of constant shape traveling at constant velocity, so applying the breaking criterion yields a maximum height for which they can exist.
The theory leading to the formulation of the KdV equation is also included, in addition to the formulation of the linearized and shallow water equations. These, however, serve only as 'stepping stones' towards the higher-order Boussinesq equations, and are not used in any further calculations.
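KdV dynamics of this kind can be explored numerically with very little code. The following is a minimal sketch (not the thesis code) of a Fourier pseudospectral solver for $u_t + 6uu_x + u_{xxx} = 0$ on a periodic domain; the grid size, domain length, and time step in the usage below are illustrative choices. A solitary wave $u(x,t) = \tfrac{c}{2}\,\mathrm{sech}^2\!\big(\tfrac{\sqrt{c}}{2}(x - ct - x_0)\big)$ should propagate with unchanged shape.

```python
import numpy as np

def solve_kdv(u0, length, dt, nsteps):
    """Integrate u_t + 6 u u_x + u_xxx = 0 on a periodic domain of the
    given length, using Fourier differentiation in space and classical
    RK4 in time. The (small) explicit time step must respect the
    dispersive stability limit |k_max|^3 * dt < 2.8."""
    n = len(u0)
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=length / n)
    ik, ik3 = 1j * k, (1j * k) ** 3

    def rhs(u):
        u_hat = np.fft.fft(u)
        u_x = np.real(np.fft.ifft(ik * u_hat))      # spectral first derivative
        u_xxx = np.real(np.fft.ifft(ik3 * u_hat))   # spectral third derivative
        return -6.0 * u * u_x - u_xxx

    u = u0.copy()
    for _ in range(nsteps):
        k1 = rhs(u)
        k2 = rhs(u + 0.5 * dt * k1)
        k3 = rhs(u + 0.5 * dt * k2)
        k4 = rhs(u + dt * k3)
        u = u + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return u
```

The mean of the solution is conserved by the equation and, to rounding error, by the scheme, which gives a convenient sanity check alongside the preserved soliton amplitude.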
Fri, 18 Dec 2015 00:00:00 GMT
http://hdl.handle.net/1956/11666
Subsea pump vibration analysis for predictive maintenance
Rouzbahaneh, Mahdi
Master thesis
The goal of this study is to compare vibration signals of subsea pumps from two different time periods. These signals are gathered for predictive maintenance purposes. Vibration analysis is one of the most efficient tools for machine diagnostics and has been an active area of research in the past few decades. In this thesis an attempt has been made to cover all the common vibration analysis methods in order to obtain more reliable results. Since a pump includes many rotating components, some kind of periodicity is expected in the vibration signal. Local defects cause an impulse in the vibration signal in each cycle of rotation, so high impulsivity in the vibration signal is an indicator of a fault. This impulsivity is, however, not clearly observed in the raw vibration signal: the impulse response function of the transmission path smears an impulse at the source into a spread-out segment in the vibration signal. Furthermore, the impulses become more hidden when the period of the impulses is shorter than the width of the impulse response function, and detection of impulses is made still more subtle by the presence of background noise. Our methods mainly focus on these two obstacles in order to recover the impulsivity of the source signal. The impulses also appear as amplitude modulation of the high resonant frequencies of the pump. Since the signal-to-noise ratio is generally low at low frequencies, we look into demodulation at high frequencies to find the impulses. Several methods were applied to the signals in order to 1) separate the deterministic and non-deterministic parts of the signal, 2) separate the source signal from the transmission path function, and 3) determine the optimal resonant frequency band and perform envelope analysis. In addition, a new method to estimate the shaft speed from the vibration signal is proposed. Two segments, one from the old and one from the new vibration signals, were chosen for analysis.
We used Minimum Entropy Deconvolution (MED) and Cepstrum Analysis to remove the transmission path effect from the vibration signal, and Time Synchronous Averaging (TSA), Autoregressive (AR) filtering, Discrete/Random Separation (DRS) and Cepstrum Analysis to separate deterministic from non-deterministic components in the vibration signal. Band-Pass Filtering, the Wavelet Packet Transform (WPT), the Hilbert-Huang Transform (HHT) and Spectral Kurtosis (SK) were used to detect the excited resonant frequencies for demodulation and envelope analysis. High kurtosis is a measure of high impulsivity in a signal, so all of our methods rely on kurtosis. The results demonstrate that Minimum Entropy Deconvolution is a very effective method for removing the transmission path effect, such that the defect impulses of the source can be clearly observed. A combination of wavelets and the HHT, as an improved HHT, is a highly reliable method for detecting the excited resonant frequencies, and Spectral Kurtosis is an efficient and direct detector of them. Cepstrum editing, or liftering, is a multi-purpose method useful at each stage, and a new kind of cepstrum editing with satisfactory output has been performed. Order tracking should be performed before Time Synchronous Averaging and Discrete/Random Separation, but in this study no tachometer signal was available for order tracking, so these methods were not effective. On examination of the old signal, only the 5th and 27th harmonics of the shaft speed had a high amplitude; these frequencies are in fact natural frequencies of the pump and do not indicate a fault. The new signal, on the other hand, exhibited many peaks, the strongest of which were the shaft speed harmonics and 1/2 harmonics. Lack of information on the pump specifications prevents us from relating these peaks to specific faults, but such harmonics are generally present in cases of mechanical looseness.
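Two of the recurring ingredients above, kurtosis as an impulsivity measure and envelope extraction via the analytic signal, are simple to state in code. The following hypothetical Python sketch is illustrative only and is unrelated to the thesis implementation.

```python
import numpy as np

def kurtosis(x):
    """Normalized fourth moment; about 3 for Gaussian noise and
    much larger for impulsive signals."""
    x = x - x.mean()
    return np.mean(x ** 4) / np.mean(x ** 2) ** 2

def envelope(x):
    """Amplitude envelope via the analytic signal, computed with an
    FFT-based Hilbert transform (assumes even-length input)."""
    n = len(x)
    spectrum = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1 : n // 2] = 2.0   # double positive frequencies
    h[n // 2] = 1.0       # keep the Nyquist bin once
    return np.abs(np.fft.ifft(spectrum * h))
```

A periodic impulse train has a kurtosis far above the Gaussian value of 3, and the envelope of an amplitude-modulated carrier recovers the modulation, which is the basis of envelope analysis.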
Sun, 29 Nov 2015 00:00:00 GMT
http://hdl.handle.net/1956/11211
Quantifying uncertainties when monitoring marine environments in connection with geological storage of CO2
Hvidevold, Hilde Kristine
Doctoral thesis
Fri, 26 Jun 2015 00:00:00 GMT
http://hdl.handle.net/1956/11048
A robust linearization scheme for finite volume based discretizations for simulation of two-phase flow in porous media
Radu, Adrian Florin; Nordbotten, Jan Martin; Pop, Iuliu Sorin; Kumar, Kundan
Journal article
In this work we consider a mathematical model for two-phase flow in porous media. The fluids are assumed immiscible and incompressible and the solid matrix non-deformable. The mathematical model for the two-phase flow is written in terms of the global pressure and a complementary pressure (obtained by using the Kirchhoff transformation) as primary unknowns. For the spatial discretization, finite volumes have been used (more precisely, the multi-point flux approximation method), and in time the backward Euler method has been employed. We present here a new linearization scheme for the nonlinear system arising after the temporal and spatial discretization. We show that the scheme is linearly convergent. Numerical experiments are presented that support the theoretical results.
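The abstract does not reproduce the scheme itself, but robust fixed-point linearizations of this general type (often called L-schemes) can be sketched for a simple model problem. The following hypothetical Python example applies such an iteration to one backward Euler step of $b(u)_t = u_{xx}$ with $b(u) = u + u^3/3$; the stabilization constant `L_par` must dominate $\sup b'$ for linear convergence, and every parameter choice below is illustrative rather than taken from the paper.

```python
import numpy as np

def l_scheme(g, A, dt, L_par, b, u_init, iters):
    """Solve b(u) - dt * A u = g by the robust linearization
        (L_par * I - dt * A) u_new = L_par * u_old - b(u_old) + g,
    i.e. b(u_new) is replaced by b(u_old) + L_par * (u_new - u_old).
    Converges linearly (no derivatives of b needed) when L_par >= sup b'."""
    n = len(g)
    M = L_par * np.eye(n) - dt * A   # fixed matrix, factorizable once in practice
    u = u_init.copy()
    for _ in range(iters):
        u = np.linalg.solve(M, L_par * u - b(u) + g)
    return u
```

Unlike Newton's method, each iteration reuses the same linear system, trading a faster asymptotic rate for robustness with respect to the initial guess and the time step.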
Tue, 01 Dec 2015 00:00:00 GMT
http://hdl.handle.net/1956/11045
Non-standard shocks in the Buckley-Leverett equation
Kalisch, Henrik; Mitrovic, Darko; Nordbotten, Jan Martin
Journal article
It is shown how delta shock waves which consist of Dirac delta distributions and classical shocks can be used to construct non-monotone solutions of the Buckley–Leverett equation. These solutions are interpreted using a recent variational definition of delta shock waves in which the Rankine–Hugoniot deficit is explicitly accounted for [6]. The delta shock waves are also limits of approximate solutions constructed using a recent extension of the weak asymptotic method to complex-valued approximations [15]. Finally, it is shown how these non-standard shocks can be fitted together to construct similarity and traveling-wave solutions which are non-monotone, but still admissible in the sense that characteristics either enter or are parallel to the shock trajectories.
Sat, 01 Aug 2015 00:00:00 GMT
http://hdl.handle.net/1956/11019
A Comparison of Anthropometric Measures for Assessing the Association between Body Size and Risk of Chronic Low Back Pain: The HUNT Study
Heuch, Ingrid; Heuch, Ivar; Hagen, Knut; Zwart, John-Anker
Journal article
<p>Background: Previous work indicates that overweight and obese individuals carry an increased risk of experiencing chronic low back pain (LBP). It is not known, however, how the association with body size depends on the choice of anthropometric measure used.</p>
<p>Objective: This work compares relationships with LBP for several measures of body size. Different results may indicate underlying mechanisms for the association between body size and risk of LBP.</p>
<p>Methods: In a cohort study, baseline information was collected in the community-based HUNT2 (1995–1997) and HUNT3 (2006–2008) surveys in Norway. Participants were 10,059 women and 8725 men aged 30–69 years without LBP, and 3883 women and 2662 men with LBP at baseline. Associations with LBP at end of follow-up were assessed by generalized linear modeling, with adjustment for potential confounders.</p>
<p>Results: Relationships between waist-hip ratio and occurrence of LBP at end of follow-up were weak and non-significant after adjustment for age, education, work status, physical activity, smoking, lipid levels and blood pressure. Positive associations with LBP at end of follow-up were all significant for body weight, BMI, waist circumference and hip circumference after similar adjustment, both in women without and with LBP at baseline, and in men without LBP at baseline. After additional mutual adjustment for anthropometric measures, the magnitude of the association with body weight increased in women without LBP at baseline (RR: 1.130 per standard deviation, 95% CI: 0.995–1.284) and in men (RR: 1.124, 95% CI: 0.976–1.294), with other measures showing weak associations only.</p>
<p>Conclusion: Central adiposity is unlikely to play a major role in the etiology of LBP. Total fat mass may be one common factor underlying the associations observed. The association with body weight remaining after mutual adjustment may reflect mechanical or structural components behind the relationship between overweight or obesity and LBP.</p>
Tue, 27 Oct 2015 00:00:00 GMT
http://hdl.handle.net/1956/10990
Layout of CCS monitoring infrastructure with highest probability of detecting a footprint of a CO2 leak in a varying marine environment
Hvidevold, Hilde Kristine; Alendal, Guttorm; Johannessen, Truls; Ali, Alfatih Omer Mohammed Ahmed; Mannseth, Trond; Avlesen, Helge
Journal article
Monitoring of the marine environment for indications of a leak, or precursors of a leak, will be an intrinsic part of any subsea CO2 storage project. A real challenge is quantifying the probability that a given monitoring program detects a leak, and designing the program accordingly. The task is complicated by the number of pathways to the surface, the difficulty of estimating probabilities of leaks and fluxes, and the need to predict the fluctuating footprint of a leak. The objective is to present a procedure for optimizing the layout of a fixed array of chemical sensors on the seafloor, using the probability of detecting a leak as the metric. A synthetic map from the North Sea is used as a basis for probable leakage points, while the spatial footprint is based on results from a General Circulation Model. Compared to an equally spaced array, the probability of detecting a leak can be nearly doubled by optimal placement of the available sensors. It is not necessarily best to place the first sensor at the most probable leakage point, since one sensor can monitor several potential leakage points. The need for a thorough baseline in order to reduce the detection threshold is also shown.
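The observation that the first sensor need not sit at the most probable leakage point can be illustrated with a toy greedy placement in Python. The leak probabilities, sensor footprints, and the simplified at-most-one-leak detection model below are all hypothetical and unrelated to the paper's data.

```python
def detection_probability(sensors, leaks, coverage):
    """Probability that a leak is detected, in a toy model where at most
    one leak occurs, at point p with probability leaks[p], and is detected
    iff some chosen sensor's footprint covers p."""
    covered = set()
    for s in sensors:
        covered |= coverage[s]
    return sum(prob for p, prob in leaks.items() if p in covered)

def greedy_layout(n_sensors, candidates, leaks, coverage):
    """Greedily pick sensor positions, each time adding the candidate that
    most increases the detection probability."""
    chosen = []
    for _ in range(n_sensors):
        best = max((c for c in candidates if c not in chosen),
                   key=lambda c: detection_probability(chosen + [c], leaks, coverage))
        chosen.append(best)
    return chosen
```

With leak probabilities A: 0.4, B: 0.25, C: 0.25, a sensor covering only the most probable point A detects with probability 0.4, while one covering both B and C detects with probability 0.5, so the greedy layout picks the latter first.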
Sat, 04 Apr 2015 00:00:00 GMT
http://hdl.handle.net/1956/10977
Heat Transfer Upscaling in Geothermal Reservoirs
Varzina, Anna
Master thesis
In this thesis we implement a numerical model of heat transfer in geothermal reservoirs. We use existing pressure and flow transport solvers as a starting point to investigate discretization techniques for a convection-conduction temperature equation. We then develop and analyse two different heat transfer solvers, explicit and implicit, that have different accuracy and convergence requirements. For the convective part of the energy equation the upwind scheme is implemented, and the two-point flux approximation is used to discretize the conductive term. Heat transfer simulations usually require large computational times due to the high resolution of the fine-scale model. For efficient computation we investigate flow-based upgridding techniques, which have previously been used for fluid transport in porous media. However, upgridding and upscaling can lead to less accurate results because too much detail is lost in the discrete model. We compare solutions on different types of grids, such as a Cartesian grid and flow-based grids generated according to various indicators such as permeability, velocity, time-of-flight and thermal conductivity. In this work we simulate an initial-boundary value problem with a heat flow through the boundaries and investigate which coarse grid leads to the most accurate results when solving the energy equation.
Fri, 20 Nov 2015 00:00:00 GMT
http://hdl.handle.net/1956/10952
Design of a monitoring program in a varying marine environment
Frøysa, Håvard Guldbrandsen
Master thesis
In this thesis we develop methods for the optimal design of a monitoring program for offshore geological CO2 storage. The goal is to find the layout of fixed chemical sensors at the seafloor that maximizes the probability of detecting a leakage. Numerical simulations of leakage scenarios are used as a basis for predicting the regions that the sensors monitor; based on the leakage scenarios, this gives the detection probability. All methods are tested on test cases, and they could also be applied to other problems involving the monitoring of potential pollutants into the ocean. The main results are the inclusion of spatial variability in the estimated leakage footprint and an exact inversion of the resulting footprint.
Mon, 14 Sep 2015 00:00:00 GMT
http://hdl.handle.net/1956/10951
A robust implicit scheme for two-phase flow in porous media
Kvashchuk, Anna
Master thesis
In this thesis we present a new implicit scheme for the numerical simulation of two-phase flow in porous media. Linear finite elements are considered for the spatial discretization. The scheme is based on the iterative IMPES approach and treats the capillary pressure term implicitly to ensure stability. Under the assumption of smoothness of the capillary pressure and phase mobility curves, we were able to prove a convergence theorem for the scheme, and two-dimensional numerical simulations furthermore verify the convergence. To illustrate the potential of the new scheme, we compare its computational efficiency to our implementation of two other common approaches to the problem: IMPES and the fully implicit formulation solved by Newton's method. The advantage of our scheme over IMPES is improved stability for larger time-steps. At the same time, it is cheaper in terms of computational cost and memory requirements than the Newton method.
Thu, 26 Nov 2015 00:00:00 GMT
http://hdl.handle.net/1956/10903
An Approach for Investigation of Geochemical Rock-Fluid Interactions
Bringedal, Carina; Berre, Inga; Radu, Florin A.
Conference object
<p>Geochemistry has a substantial impact on the exploitation of geothermal systems. When water is injected into a geothermal reservoir, the injected water and the in-situ brine have different temperatures and chemical compositions, and will flow through highly heterogeneous regions where the rock has varying chemical properties and where temperature and flow regimes can alter significantly.</p>
<p>As a consequence of flow and geochemical reactions, the composition of the reservoir fluids as well as the reservoir rock properties will develop dynamically with time. Minerals dissolving from and precipitating onto the reservoir matrix can change the porosity, and hence the permeability, of the system substantially. Mineral solubility can change through the cooling of the rock by the injected water, as well as through the injected water having a different salt content than the in-situ brine. The interaction between altering temperature, solute transport with mineral dissolution and precipitation, and fluid flow is highly coupled and challenging to model appropriately, as the relevant physical processes jointly affect each other. The effect of changing porosity through the production period of the geothermal reservoir may have severe impacts on operating conditions, as pores may close and block flow paths, or new pores may open to create enhanced flow conditions.</p>
<p>We propose an approach that models all three processes on a relevant time scale. The considered mathematical and corresponding overall numerical solution strategy enables us to investigate the coupling between flow, geochemical and thermal effects, as well as to develop tailored numerical approaches.</p>
Wed, 01 Jan 2014 00:00:00 GMT
http://hdl.handle.net/1956/10902
Influence of natural convection in a porous medium when producing from borehole heat exchangers
Bringedal, Carina; Berre, Inga; Nordbotten, Jan Martin
Journal article
Convection currents in a porous medium form when the medium is subject to sufficient heating from below (or, equivalently, cooling from above) or when cooled or heated from the side. In the context of geothermal energy extraction, we are interested in how the convection currents transport heat when a sealed borehole containing cold fluid, also known as a borehole heat exchanger, extracts heat from the porous medium. Using pseudospectral methods together with domain decomposition, we consider two scenarios for heat extraction from a borehole: one system where the porous medium is initialized with constant temperature in the vertical direction, and one system initialized with a vertical temperature gradient. We find the convection currents to have a positive effect on the heat extraction for the case with a constant initial temperature in the porous medium, and a negative effect for some of the systems with an initial temperature gradient in the porous medium: convection gives a negative effect when the borehole temperature is close to the initial temperature in the porous medium, but gradually provides a positive effect if the borehole temperature is decreased and the Rayleigh number is larger.
Thu, 01 Aug 2013 00:00:00 GMThttp://hdl.handle.net/1956/109022013-08-01T00:00:00ZLinear and nonlinear convection in porous media between coaxial cylinders
http://hdl.handle.net/1956/10901
Linear and nonlinear convection in porous media between coaxial cylinders
Bringedal, Carina; Berre, Inga; Nordbotten, Jan Martin; Rees, D. Andrew S.
Journal article
We uncover novel features of three-dimensional natural convection in porous media by investigating convection in an annular porous cavity contained between two vertical coaxial cylinders. The investigation uses a linear stability analysis together with high-order numerical simulations, based on pseudospectral methods, to model the nonlinear regime. The onset of convection cells and their preferred planform are studied, and the stability of the modes with respect to different types of perturbation is investigated. We also examine how variations in the Rayleigh number affect the convection modes and their stability regimes. Compared with previously published results, we show that the problem exhibits an increased complexity regarding which modes will be obtained. Some stable secondary or mixed modes are identified, and some overlapping stability regions for different convective modes are determined.
Sat, 01 Jan 2011 00:00:00 GMThttp://hdl.handle.net/1956/109012011-01-01T00:00:00ZModeling of heat transfer in porous media in the context of geothermal energy extraction
http://hdl.handle.net/1956/10900
Modeling of heat transfer in porous media in the context of geothermal energy extraction
Bringedal, Carina
Doctoral thesis
<p>This dissertation concerns mathematical modeling related to heat transfer in the subsurface, relevant for geothermal energy extraction. Two applications are considered: first, natural convection in a homogeneous porous medium around a borehole is studied; second, pore scale models for non-isothermal reactive transport with changing porosity are formulated and upscaled.</p>
<p>Natural convection is heat transfer due to currents arising from density differences in the fluid. A vertical borehole heat exchanger that does not inject fluid into the subsurface may encounter convection currents in the surrounding porous medium when that medium is saturated with water. These convection currents can affect the heat transfer into the borehole and hence influence the heat production from it. To address this issue, both a theoretical framework and a numerical approach for the natural convection are considered. In the theoretical framework, a linear stability analysis is applied to quantify when and how convection currents may occur spontaneously, without the presence of a producing borehole heat exchanger. A high-order numerical scheme is implemented to quantify the effect on an operating borehole. The linear stability analysis is performed for an idealized setting with a homogeneous porous medium filling an annular cylinder that is heated from below and cooled from above. The analysis provides criteria for the onset of convection and the associated pattern of convection currents in the linearized case. As the onset criterion and the convection pattern that appears depend on the size of the annular cylinder, maps describing both as functions of the inner and outer radius of the annulus are created.</p>
<p>To investigate the non-linear regime of natural convection, pseudospectral methods are applied to discretize the non-linear model equations. Pseudospectral methods are high-order numerical schemes known for their exponential convergence rate. The convergence rate for our model problem is investigated, and the scheme is used to examine the stability of the convection currents found in the linear stability analysis. The linear and non-linear regimes are found to overlap when the convection is weak, while stronger convection introduces non-linear features only visible through the simulations. Further, by applying a more realistic configuration for the geothermal reservoir, pseudospectral methods are used in combination with domain decomposition to estimate the effect of natural convection on an operating borehole. By varying the flow properties and temperature conditions, some cases are found where the presence of convection lowers the amount of heat produced by the borehole, while other cases show a positive effect on the heat production.</p>
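The exponential ("spectral") accuracy that motivates pseudospectral schemes can be illustrated with a minimal, self-contained example: Fourier differentiation of a smooth periodic function via the FFT. This sketch is generic and not tied to the thesis's specific model equations.

```python
import numpy as np

# Minimal sketch of Fourier pseudospectral differentiation. For smooth
# periodic functions the error decays exponentially in the number of modes,
# reaching machine precision with only a few dozen points.

def spectral_derivative(u):
    """Differentiate equispaced periodic samples u on [0, 2*pi) via the FFT."""
    n = len(u)
    ik = 1j * np.fft.fftfreq(n, d=1.0 / n)   # i * integer wavenumbers
    return np.real(np.fft.ifft(ik * np.fft.fft(u)))

n = 32
x = 2 * np.pi * np.arange(n) / n
u = np.exp(np.sin(x))                         # smooth periodic test function
du_exact = np.cos(x) * np.exp(np.sin(x))
err = np.max(np.abs(spectral_derivative(u) - du_exact))
print(f"max error with {n} modes: {err:.2e}")
```

Compare this with a second-order finite difference, whose error at n = 32 would be several orders of magnitude larger.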
<p>When geothermal energy is produced by injecting fluid into the subsurface and producing warmer fluid, other challenges arise. The in-situ groundwater and the injected water will typically have different temperature and ion content; hence the injection shifts the geochemical system from its original state of equilibrium. We focus on mineral precipitation and dissolution reactions induced by this shift. As most minerals have temperature-dependent solubilities, chemical reactions may be triggered both by the temperature change and by the variations in ion content. Through the rock-fluid interactions that mineral precipitation and dissolution entail, the porosity, and hence the permeability, of the porous medium can change, which in turn alters the production conditions for the geothermal reservoir. As the fluid flow, heat transport and reactive transport affect each other and involve processes over several time scales, this problem is highly coupled and challenging to model. This coupling is illustrated and used to motivate the need for pore scale models, in order to better understand how the interactions between the physical processes behave at the pore scale. Further, upscaling through homogenization is performed on three different pore scale models, each applying different assumptions on the pore scale geometry or on the underlying physics. The upscaling process isolates the average behavior of the pore scale effects and shows how the processes are coupled at the Darcy scale. One of the upscaled models is illustrated by implementing it with a finite volume scheme and comparing it with some simpler models, to show when the upscaled model should be used and when it can be replaced by a simpler model not honoring all the pore scale effects. Finite volume methods are conservative schemes, which are applied when a consistent formulation of the transport mechanisms is important.</p>
Fri, 02 Oct 2015 00:00:00 GMThttp://hdl.handle.net/1956/109002015-10-02T00:00:00ZThe Whitham Equation as a model for surface water waves
http://hdl.handle.net/1956/10890
The Whitham Equation as a model for surface water waves
Moldabayev, Daulet; Kalisch, Henrik; Dutykh, Denys
Journal article
The Whitham equation was proposed as an alternative model equation for the simplified description of uni-directional wave motion at the surface of an inviscid fluid. As the Whitham equation incorporates the full linear dispersion relation of the water wave problem, it is thought to provide a more faithful description of shorter waves of small amplitude than traditional long wave models such as the KdV equation.
In this work, we identify a scaling regime in which the Whitham equation can be derived from the Hamiltonian theory of surface water waves. A Hamiltonian system of Whitham type allowing for two-way wave propagation is also derived. The Whitham equation is integrated numerically, and it is shown that the equation gives a close approximation of inviscid free surface dynamics as described by the Euler equations. The performance of the Whitham equation as a model for free surface dynamics is also compared to different free surface models: the KdV equation, the BBM equation, and the Padé (2,2) model. It is found that in a wide parameter range of amplitudes and wavelengths, the Whitham equation performs on par with or better than the three considered models.
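The distinction between the full dispersion relation and its long-wave truncation can be made concrete with a small numeric comparison. In nondimensional units (g = h = 1), the full water-wave phase speed is sqrt(tanh(k)/k), while the linearized KdV phase speed is its second-order Taylor expansion 1 - k²/6; the values below are purely illustrative.

```python
import math

# Compare the full linear phase speed retained by the Whitham equation with
# the long-wave (KdV) approximation, nondimensional units g = h = 1.

def c_whitham(k):
    """Full water-wave phase speed sqrt(tanh(k)/k)."""
    return math.sqrt(math.tanh(k) / k)

def c_kdv(k):
    """KdV phase speed: the expansion 1 - k^2/6, valid for long waves."""
    return 1.0 - k * k / 6.0

for k in (0.1, 0.5, 1.0, 2.0):
    print(f"k = {k}: Whitham {c_whitham(k):.4f}   KdV {c_kdv(k):.4f}")
```

For k = 0.1 the two speeds agree to about five decimals; by k = 2 the KdV value has drifted far from the full dispersion relation, which is why the Whitham equation is expected to handle shorter waves better.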
Sat, 01 Aug 2015 00:00:00 GMThttp://hdl.handle.net/1956/108902015-08-01T00:00:00ZMatroider og lineære koder
http://hdl.handle.net/1956/10780
Matroider og lineære koder
Larsen, Ann-Hege
Master thesis
Wed, 01 Jun 2005 00:00:00 GMThttp://hdl.handle.net/1956/107802005-06-01T00:00:00ZAlgebra i første klasse
http://hdl.handle.net/1956/10681
Algebra i første klasse
Lid, Elin Røkeberg
Master thesis
The thesis concerns algebra in first grade. What is early algebra, and how can we teach in a way that develops pupils' algebraic thinking? Early algebra does not mean teaching school algebra earlier; it is a new way of approaching mathematics teaching in the early school years. I have reviewed existing research and identified concrete recommendations and teaching sequences. In addition, I have adapted others' teaching designs and developed my own. The thesis thus contains recommendations for, and examples of, how algebraic thinking can be taught in first grade.
Mon, 12 Oct 2015 00:00:00 GMThttp://hdl.handle.net/1956/106812015-10-12T00:00:00ZEnsemble Methods of Data Assimilation in Porous Media Flow for Non-Gaussian Prior Probability Density
http://hdl.handle.net/1956/10607
Ensemble Methods of Data Assimilation in Porous Media Flow for Non-Gaussian Prior Probability Density
Zhang, Yanhui
Doctoral thesis
<p>Ensemble-based data-assimilation methods have grown rapidly since the ensemble Kalman filter (EnKF) was introduced into petroleum engineering. Many techniques have been developed to overcome the inherent disadvantages of ensemble-based methods, including the assumptions of linearity and Gaussianity, and to make them more robust and reliable for practical reservoir applications. The current trend in petroleum reservoir history matching is towards more realistic reservoir models with complex geology. Geologic facies modeling plays an important role in reservoir characterization as a way to reproduce important patterns of heterogeneity in petroleum reservoirs and to facilitate the modeling of the petrophysical properties of reservoir rocks. Because static data and general geological knowledge are almost never sufficient to determine the distribution of geologic facies uniquely, it is advantageous to assimilate dynamic data to reduce the uncertainty.</p>
<p>The history-matching problem for geologic facies involves strong nonlinearity and non-Gaussianity. When the ensemble Kalman filter (EnKF) or related ensemble-based methods are used to assimilate data in a straightforward way, the updated model variables lose geological realism and become contaminated with noise. It is therefore necessary to develop effective measures to adapt ensemble-based data-assimilation methods to facies problems. This thesis is motivated by this necessity and explores ways to condition geologic facies to production data using ensemble-based methods. The focus is placed on the post-processing approach.</p>
<p>By modifying the standard randomized maximum likelihood algorithm to accommodate non-Gaussian problems, we introduce a methodology that consists of a straightforward implementation of ensemble-based data assimilation, followed by a sequential optimization procedure without iteration. In a similar manner, we develop another method for post-processing the updated reservoir models after data assimilation with ensemble-based methods. In the post-processing step, the objective function is composed of a weighted quadratic term measuring the distance to the posterior realizations and a penalty term forcing the model variables to take discrete values. A special emphasis is put on investigating the importance of the correlation information among the updated model variables introduced by the data, which is usually ignored in the probability map approach. All of the proposed methodologies are evaluated via numerical experiments and demonstrate their utility for improving the assimilation of data into geologic facies models.</p>
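The ensemble update underlying all of these methods is the standard stochastic (perturbed-observation) EnKF analysis step. The sketch below shows that step on a deliberately tiny toy problem with an identity forward model; it is a generic illustration, not the thesis's facies-specific post-processing algorithm, and all sizes and values are illustrative.

```python
import numpy as np

# Minimal stochastic EnKF analysis step. X holds the state ensemble, Y the
# ensemble of predicted data, d the observation, R the observation-error
# covariance. Toy sizes; not the thesis's facies workflow.

def enkf_update(X, Y, d, R, rng):
    """X: (n, Ne) states; Y: (m, Ne) predicted data; d: (m,) observations."""
    Ne = X.shape[1]
    Xa = X - X.mean(axis=1, keepdims=True)
    Ya = Y - Y.mean(axis=1, keepdims=True)
    C_xy = Xa @ Ya.T / (Ne - 1)                       # state-data covariance
    C_yy = Ya @ Ya.T / (Ne - 1) + R                   # data covariance + noise
    K = C_xy @ np.linalg.inv(C_yy)                    # Kalman gain
    D = d[:, None] + rng.multivariate_normal(np.zeros(len(d)), R, Ne).T
    return X + K @ (D - Y)                            # perturbed-observation update

rng = np.random.default_rng(0)
Ne, truth = 200, 2.0
X = rng.normal(0.0, 1.0, size=(1, Ne))                # prior ensemble, mean 0
Y = X.copy()                                          # identity forward model
R = np.array([[0.1]])
Xp = enkf_update(X, Y, np.array([truth]), R, rng)
print(f"prior mean {X.mean():.2f} -> posterior mean {Xp.mean():.2f}")
```

The posterior ensemble mean moves most of the way toward the observed value and the ensemble spread shrinks, in line with the scalar Kalman formulas.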
<p>In this thesis, we also investigate the application of the discrete curvelet transform to the denoising of model variables updated by the direct use of ensemble-based data-assimilation methods. The numerical experiments show that curvelets are useful for denoising in this problem, but the data match is lost unless the covariance is included.</p>
Fri, 28 Aug 2015 00:00:00 GMThttp://hdl.handle.net/1956/106072015-08-28T00:00:00ZModels for CO2 injection with coupled thermal processes
http://hdl.handle.net/1956/10591
Models for CO2 injection with coupled thermal processes
Kometa, Bawfeh Kingsley; Gasda, Sarah Eileen; Aavatsmark, Ivar
Journal article
Large-scale models of carbon dioxide (CO2) storage in geological formations must capture the relevant physical, chemical and thermodynamic processes that affect the migration and ultimate fate of injected CO2. These processes should be modeled over the appropriate length and time scales. Some important mechanisms include convection-driven dissolution, caprock roughness, and local capillary effects, all of which can impact the direction and speed of the plume as well as long-term trapping efficiency. In addition, CO2 can be injected at a different temperature than reservoir conditions, leading to significant density variation within the plume over space and time. This impacts buoyancy and migration patterns, which becomes particularly important for injection sites with temperature and pressure conditions near the critical point. Therefore, coupling thermal processes with fluid flow should be considered in order to correctly capture plume migration and trapping within the reservoir. This study focuses on compositional non-isothermal flow using 3D and vertically upscaled models. The model concept is demonstrated on simple systems. In addition, we explore CO2 thermodynamic models for reliable prediction of density under different injection pressures, temperatures and compositions.
Wed, 01 Jan 2014 00:00:00 GMThttp://hdl.handle.net/1956/105912014-01-01T00:00:00ZAnalytical Methods for Upscaling of Fractured Geological Reservoirs
http://hdl.handle.net/1956/10558
Analytical Methods for Upscaling of Fractured Geological Reservoirs
Sævik, Pål Næverlid
Doctoral thesis
<p>Numerical simulations have become an essential tool for planning well operations in the subsurface, whether the application is petroleum production, geothermal energy, groundwater utilization or waste disposal and leakage. To represent a complicated heterogeneous subsurface reservoir in a simulator model, material properties within each part of the reservoir are replaced by effective properties through upscaling. In this thesis, analytical upscaling methods for fractured reservoirs are studied, with special emphasis on effective medium methods. These methods have been suggested for fracture upscaling by a number of authors, but reliable analytical error estimates and comparisons with comprehensive numerical simulations have been lacking.</p>
<p>The work in this thesis evaluates the accuracy of effective medium methods by providing an extensive comparison between analytical and numerical estimates, for both isotropic and anisotropic three-dimensional fracture configurations. The estimates are also analyzed with respect to known theoretical results, such as rigorous upper and lower bounds, asymptotic behavior and percolation properties.</p>
<p>It is found that one of the effective medium variants, the asymmetric self-consistent method, has the correct asymptotic behavior, satisfies all analytical bounds, and agrees well with numerical percolation results. This is somewhat surprising, since the method has been regarded in the literature as having the weakest theoretical foundation. One explanation for the good results may be the special geometry that a fracture/matrix system represents, which agrees well with the way the method is defined and derived.</p>
<p>As part of the work in this thesis, effective medium formulations that are numerically stable for arbitrarily thin inclusions are developed. The formulations show good convergence properties in the general anisotropic case, and explicit expressions for the isotropic and slightly anisotropic cases are also given. In the case of very thin inclusions, the new formulations allow the number of input parameters to be reduced.</p>
<p>Finally, the thesis investigates the use of assisted history matching for fractured reservoirs. It is shown that history matching of upscaled models may generate parameter distributions that are inconsistent with the underlying fracture description, resulting in unphysical connectivity estimates for the fracture network. The problem can be avoided by history matching the fracture parameters directly and including fracture upscaling as an integral part of the parameter inversion framework. This adds to the computational cost, but the additional effort is negligible if analytical fracture upscaling is used.</p>
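The self-consistent idea behind these estimates is a fixed-point condition on the effective property. The sketch below solves the classical symmetric (Bruggeman) self-consistent equation for spherical inclusions, which is a simpler relative of the asymmetric variant for thin fractures studied in the thesis; it is shown only to illustrate the fixed-point structure, with illustrative conductivities.

```python
# Classical symmetric (Bruggeman) self-consistent effective-medium estimate
# for a two-phase mix of spheres, solved by bisection. A simpler relative of
# the asymmetric self-consistent method for fractures studied in the thesis.

def bruggeman(sigma1, sigma2, phi, tol=1e-12):
    """Effective conductivity; phi is the volume fraction of phase 1."""
    lo, hi = min(sigma1, sigma2), max(sigma1, sigma2)
    while hi - lo > tol * hi:
        se = 0.5 * (lo + hi)
        resid = (phi * (sigma1 - se) / (sigma1 + 2 * se)
                 + (1 - phi) * (sigma2 - se) / (sigma2 + 2 * se))
        if resid > 0:          # trial value too low: root lies above
            lo = se
        else:
            hi = se
    return 0.5 * (lo + hi)

s = bruggeman(10.0, 1.0, 0.4)
print(f"sigma_eff = {s:.4f}")
```

At the root, the average polarization of the two phases embedded in the effective medium vanishes, which is exactly the self-consistency requirement.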
Fri, 16 Oct 2015 00:00:00 GMThttp://hdl.handle.net/1956/105582015-10-16T00:00:00ZVertically averaged equations with variable density for CO2 flow in porous media
http://hdl.handle.net/1956/10548
Vertically averaged equations with variable density for CO2 flow in porous media
Andersen, Odd; Gasda, Sarah Eileen; Nilsen, Halvor Møll
Journal article
Carbon capture and storage has been proposed as a viable option to reduce CO2 emissions. Geological storage of CO2, where the gas is injected into geological formations for practically indefinite storage, is an integral part of this strategy. Mathematical models and numerical simulations are important tools for better understanding the processes taking place underground during and after injection. Due to the very large spatial and temporal scales involved, commercial 3D-based simulators for the petroleum industry quickly become impractical for answering questions related to the long-term fate of injected CO2. There is therefore an interest in developing simplified modeling tools that are effective for this type of problem. One approach investigated in recent years is the use of upscaled models based on the assumption of vertical equilibrium (VE). Under this assumption, the simulation problem is essentially reduced from 3D to 2D, allowing much larger models to be considered at the same computational cost. So far, most work on VE models for CO2 storage has either assumed incompressible CO2 or only permitted lateral variations in CO2 density (semi-compressible). In the present work, we propose a way to fully include variable CO2 density within the VE framework, making it possible to also model vertical density changes. We derive the fine-scale and upscaled equations involved and investigate the resulting effects. In addition, we compare incompressible, semi-compressible, and fully compressible CO2 flow for some model scenarios, using an in-house, fully implicit numerical code based on automatic differentiation, implemented using the MATLAB Reservoir Simulation Toolbox.
Sun, 01 Mar 2015 00:00:00 GMThttp://hdl.handle.net/1956/105482015-03-01T00:00:00ZInternational Conference on Greenhouse Gas Technologies (GHGT-12) Models for CO2 injection with coupled thermal processes
http://hdl.handle.net/1956/10534
International Conference on Greenhouse Gas Technologies (GHGT-12) Models for CO2 injection with coupled thermal processes
Kometa, Bawfeh Kingsley; Gasda, Sarah Eileen; Aavatsmark, Ivar
Journal article
Large-scale models of carbon dioxide (CO2) storage in geological formations must capture the relevant physical, chemical and thermodynamic processes that affect the migration and ultimate fate of injected CO2. These processes should be modeled over the appropriate length and time scales. Some important mechanisms include convection-driven dissolution, caprock roughness, and local capillary effects, all of which can impact the direction and speed of the plume as well as long-term trapping efficiency. In addition, CO2 can be injected at a different temperature than reservoir conditions, leading to significant density variation within the plume over space and time. This impacts buoyancy and migration patterns, which becomes particularly important for injection sites with temperature and pressure conditions near the critical point. Therefore, coupling thermal processes with fluid flow should be considered in order to correctly capture plume migration and trapping within the reservoir. This study focuses on compositional non-isothermal flow using 3D and vertically upscaled models. The model concept is demonstrated on simple systems. In addition, we explore CO2 thermodynamic models for reliable prediction of density under different injection pressures, temperatures and compositions.
Wed, 01 Jan 2014 00:00:00 GMThttp://hdl.handle.net/1956/105342014-01-01T00:00:00ZGood ethics or political and cultural censoring in science?
http://hdl.handle.net/1956/10486
Good ethics or political and cultural censoring in science?
Torrissen, Ole; Glover, Kevin; Haug, Tore; Misund, Ole Arve; Skaug, Hans J.; Kaiser, Matthias
Journal article
Peer-reviewed journals are the cornerstone of communicating scientific results. They play a crucial role in quality assurance through the review process, but they also create opportunities for discussion in the scientific community of the implications of the results or the validation of methods and data. This requires that journals adhere to commonly accepted scientific standards and are open about their editorial policy. Norwegian scientists experience problems in getting research on minke whales accepted for publication when the data have been collected in association with commercial whaling. The journal Biology Letters refuses to publish papers based on data from the Norwegian whale register while publicly claiming a sole focus on scientific quality. Although there are good arguments for claiming that clearly unethical research should not be rewarded with scientific publications, one also has to realize that some fields of research are beset with unresolved ethical and cultural debates. In these cases, it is to the benefit of the progress of science, and indeed society, to be open about the issues and support arguments through scientific studies. Political or cultural censoring of scientific information will in any case jeopardize the role of journals in the quality assurance of scientific research and undermine the credibility of science as a supplier of objective and reliable knowledge.
Sun, 01 Jan 2012 00:00:00 GMThttp://hdl.handle.net/1956/104862012-01-01T00:00:00ZNumerical computation of a finite amplitude sound beam
http://hdl.handle.net/1956/10468
Numerical computation of a finite amplitude sound beam
Berntsen, Jarle; Vefring, Erlend Heggelund
Working paper
Wed, 01 Jan 1986 00:00:00 GMThttp://hdl.handle.net/1956/104681986-01-01T00:00:00ZSolution strategies for nonlinear conservation laws
http://hdl.handle.net/1956/10433
Solution strategies for nonlinear conservation laws
Skogestad, Jan Ole
Doctoral thesis
<p>Nonlinear conservation laws form the basis for models for a wide range of physical phenomena. Finding an optimal strategy for solving these problems can be challenging, and a good strategy for one problem may fail spectacularly for others. As different problems have different challenging features, exploiting knowledge about the problem structure is a key factor in achieving an efficient solution strategy.</p>
<p>Most strategies found in the literature for solving nonlinear problems involve a linearization step, usually using Newton's method, which replaces the original nonlinear problem by an iteration process consisting of a series of linear problems. A large effort is then spent on finding a good strategy for solving these linear problems, which involves choosing suitable preconditioners and linear solvers. This approach is in many cases a good choice, and a multitude of different methods have been developed.</p>
<p>However, the linearization step to some degree involves a loss of information about the original problem. This is not necessarily critical, but in many cases the structure of the nonlinear problem can be exploited to a larger extent than what is possible when working solely on the linearized problem. This may involve knowledge about dominating physical processes and specifically on whether a process is near equilibrium.</p>
<p>By using nonlinear preconditioning techniques developed in recent years, certain attractive features arise, such as automatic localization of the computations to the parts of the problem domain with the strongest nonlinearities. In the present work, these methods are further refined to obtain a framework for nonlinear preconditioning that also takes equilibrium information into account. This framework is developed mainly in the context of porous media, but in a general manner that allows application to a wide range of problems. A scalability study shows that the method is scalable for challenging two-phase flow problems. It is also demonstrated for nonlinear elasticity problems.</p>
<p>Some models arising from nonlinear conservation laws are best solved using completely different strategies than the approach outlined above. One such example can be found in the field of surface gravity waves. For special types of nonlinear waves, such as solitary waves and undular bores, the well-known Korteweg-de Vries (KdV) equation has been shown to be a suitable model. This equation has many interesting properties not typical of nonlinear equations, which may be exploited in the solver, and strategies usually reserved for linear problems may be applied. This work includes a comparative study of two discretization methods with highly different properties for this equation.</p>
Fri, 21 Aug 2015 00:00:00 GMThttp://hdl.handle.net/1956/104332015-08-21T00:00:00ZDomain decomposition preconditioning for non-linear elasticity problems
http://hdl.handle.net/1956/10432
Domain decomposition preconditioning for non-linear elasticity problems
Keilegavlen, Eirik; Skogestad, Jan Ole; Nordbotten, Jan Martin
Conference object
We consider domain decomposition techniques for a non-linear elasticity problem. Our main focus is on non-linear preconditioning, realized in the framework of additive Schwarz preconditioned inexact Newton (ASPIN) methods. The standard 1-level ASPIN method is extended to a 2-level method by adding a non-linear coarse solver. Numerical experiments show that the coarse component is necessary for scalability in terms of linear iterations inside the Newton loop. Moreover, for problems dominated by nonlinearities that are not localized in space, the non-linear coarse iterations are crucial for achieving computational efficiency.
Wed, 01 Jan 2014 00:00:00 GMThttp://hdl.handle.net/1956/104322014-01-01T00:00:00ZModellering og estimering av romlig avhengighet i forsikring
http://hdl.handle.net/1956/10416
Modellering og estimering av romlig avhengighet i forsikring
Sellereite, Nikolai
Master thesis
In recent years, several studies have been published in which Bayesian hierarchical models with given spatial dependence structures are proposed as potential tools for insurance companies' work on geographic price differentiation. In this thesis the problem is approached from a frequentist standpoint, with the focus restricted to models for claim counts. Claim size, lifetime and the like are other examples of response variables to which the theoretical framework can be applied. Parameter estimation is carried out by maximum likelihood, and because of the high-dimensional integrals involved, the Laplace approximation and automatic differentiation are introduced, with the estimation procedure automated using the package Template Model Builder (TMB). The latent spatial effect is modelled as a Gaussian Markov random field (GMRF) with different choices of spatial dependence structure. Models with and without latent spatial variables are fitted to a simulated insurance portfolio, where the models without latent spatial variables correspond to generalized linear models. The predictive performance of the models is validated by simulating 1000 new insurance portfolios.
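The Laplace approximation that TMB automates can be sketched by hand for a one-dimensional latent effect. The toy model below is illustrative only (a single Poisson count with a Gaussian log-rate, not the thesis's portfolio model): find the mode of the negative log joint by Newton iteration, then approximate the marginal likelihood with the Gaussian curvature correction, and compare against brute-force quadrature.

```python
import math
import numpy as np

# Laplace approximation of a marginal likelihood with one latent Gaussian
# effect: y ~ Poisson(exp(u)), u ~ N(0, sigma^2). Illustrative toy model.

def neg_log_joint(u, y, sigma):
    """Negative log of Poisson(y | exp(u)) * N(u; 0, sigma^2)."""
    return (-(y * u - math.exp(u) - math.lgamma(y + 1))
            + 0.5 * u * u / sigma ** 2 + 0.5 * math.log(2 * math.pi * sigma ** 2))

def laplace_marginal(y, sigma, iters=50):
    u = 0.0
    for _ in range(iters):                        # Newton iteration to the mode
        grad = -y + math.exp(u) + u / sigma ** 2
        hess = math.exp(u) + 1.0 / sigma ** 2
        u -= grad / hess
    hess = math.exp(u) + 1.0 / sigma ** 2         # curvature at the mode
    return math.exp(-neg_log_joint(u, y, sigma)) * math.sqrt(2 * math.pi / hess)

y, sigma = 4, 0.7
grid = np.linspace(-5.0, 5.0, 20001)
vals = np.exp([-neg_log_joint(u, y, sigma) for u in grid])
quad = float(vals.sum()) * (grid[1] - grid[0])    # brute-force quadrature
print(f"Laplace {laplace_marginal(y, sigma):.5f} vs quadrature {quad:.5f}")
```

In higher dimensions the grid becomes infeasible while the Laplace step stays cheap, which is precisely why the thesis relies on it.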
Mon, 01 Jun 2015 00:00:00 GMThttp://hdl.handle.net/1956/104162015-06-01T00:00:00ZMønstre og kvalitet. Analyse av samtaler i matematikkundervisning
http://hdl.handle.net/1956/10360
Mønstre og kvalitet. Analyse av samtaler i matematikkundervisning
Mjaavatn, Geir
Master thesis
The thesis discusses how conversations between teacher and pupils in mathematics teaching can be analysed, and then discusses what characterizes a conversation of high quality.
Wed, 03 Jun 2015 00:00:00 GMThttp://hdl.handle.net/1956/103602015-06-03T00:00:00ZStatistiske modelleringer i det europeiske gassmarkedet
http://hdl.handle.net/1956/10346
Statistiske modelleringer i det europeiske gassmarkedet
Frontéri, Anette Ullestad
Master thesis
In this thesis we study the natural gas market in Europe. First we look at which factors affect the gas price. We then study statistical theory on time series, volatility and copulas. Finally, statistical modelling is carried out on natural gas prices in three European gas markets. The natural gas prices are transformed into log-returns. GARCH models are used to model the volatility of the log-returns. Copulas are used to model the dependence structure between the marginal distributions in the European gas markets. Finally, cointegration is used to describe the relationship between the European gas prices.
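The GARCH modelling of log-return volatility mentioned above rests on a simple recursion for the conditional variance. The sketch below simulates a GARCH(1,1) path and filters it back; the parameter values are illustrative only, not fitted gas-market estimates.

```python
import numpy as np

# GARCH(1,1): sigma_t^2 = omega + alpha * r_{t-1}^2 + beta * sigma_{t-1}^2.
# Illustrative parameters, not fitted gas-market values.

def garch_filter(returns, omega, alpha, beta):
    """Conditional variance series sigma_t^2 implied by a GARCH(1,1) model."""
    var = np.empty(len(returns))
    var[0] = omega / (1.0 - alpha - beta)      # start at the unconditional variance
    for t in range(1, len(returns)):
        var[t] = omega + alpha * returns[t - 1] ** 2 + beta * var[t - 1]
    return var

# simulate a path with known parameters, then filter it back
rng = np.random.default_rng(1)
omega, alpha, beta = 0.05, 0.1, 0.85
n = 1000
r = np.empty(n)
v = omega / (1.0 - alpha - beta)
for t in range(n):
    r[t] = np.sqrt(v) * rng.standard_normal()
    v = omega + alpha * r[t] ** 2 + beta * v
sigma2 = garch_filter(r, omega, alpha, beta)
print(f"mean conditional variance: {sigma2.mean():.3f} (unconditional: 1.0)")
```

The filtered variances cluster around the unconditional level omega / (1 - alpha - beta), with the persistence alpha + beta controlling how slowly volatility shocks decay.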
Mon, 01 Jun 2015 00:00:00 GMThttp://hdl.handle.net/1956/103462015-06-01T00:00:00ZNatural Convection in Layered Porous Media between Coaxial Cylinders
http://hdl.handle.net/1956/10336
Natural Convection in Layered Porous Media between Coaxial Cylinders
Selheim, Bianca
Master thesis
In this thesis a mathematical model is developed to describe the onset of natural convection in a two-layer porous medium located between two coaxial cylinders, motivated by natural convection processes in geothermal systems. The cylinders are heated from below and cooled from above. We consider the top and bottom to be impermeable and perfectly heat conducting, while the sidewalls are assumed to be impermeable and insulated. At the interface between the layers, we require continuity in temperature, pressure, vertical flow and heat flow. We apply linear stability analysis to determine the criterion for the onset of natural convection in the bottom layer. The effect of permeability contrasts between the two layers on the critical Rayleigh number is investigated. We present new results, as our analysis applies to a two-layer medium in cylindrical coordinates. The results are validated by comparison with similar previous studies for a single-layer medium with the same geometry, and for a layered medium within a box geometry. The model has real-life applications, as it may serve as an indicator of the presence of natural convection in the subsurface, and the results can be applied in the benchmarking of a numerical simulator.
Sun, 31 May 2015 00:00:00 GMThttp://hdl.handle.net/1956/103362015-05-31T00:00:00ZDetecting atmospheric rivers using persistent homology
http://hdl.handle.net/1956/10335
Detecting atmospheric rivers using persistent homology
Alfsvåg, Kristian Stusdal
Master thesis
This master's thesis is a first investigation into whether it is possible to detect atmospheric rivers using persistent homology. Two different computations are carried out and a basic analysis is made. In addition, an implementation of persistent homology developed during the thesis work is described.
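The core computation in persistent homology can be illustrated in its simplest form: 0-dimensional sublevel-set persistence of a 1-D scalar field, where connected components are born at local minima and die when they merge (the elder rule). This generic union-find sketch is not the thesis's implementation, only the standard textbook algorithm.

```python
# 0-dimensional sublevel-set persistence of a 1-D scalar field via union-find.
# Components are born at local minima and die when they merge (elder rule).

def persistence_0d(values):
    """Return sorted (birth, death) pairs for the sublevel filtration."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    parent = {}

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]      # path halving
            i = parent[i]
        return i

    pairs = []
    for i in order:
        parent[i] = i                          # vertex enters the filtration
        for j in (i - 1, i + 1):               # merge with processed neighbours
            if j in parent:
                ri, rj = find(i), find(j)
                if ri == rj:
                    continue
                if values[ri] > values[rj]:    # keep the elder (lower) root
                    ri, rj = rj, ri
                if values[rj] < values[i]:     # skip zero-persistence pairs
                    pairs.append((values[rj], values[i]))
                parent[rj] = ri
    pairs.append((values[find(order[0])], float("inf")))   # essential class
    return sorted(pairs)

print(persistence_0d([3, 1, 4, 0, 5, 2, 6]))
```

For a 2-D atmospheric field the same idea applies with grid-cell adjacency instead of left/right neighbours; long-lived components then correspond to prominent features such as candidate atmospheric rivers.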
Mon, 01 Jun 2015 00:00:00 GMThttp://hdl.handle.net/1956/103352015-06-01T00:00:00ZLogarithmic Hochschild homology
http://hdl.handle.net/1956/10333
Logarithmic Hochschild homology
Piceghello, Stefano
Master thesis
The purpose of this thesis is to analyse the logarithmic Hochschild homology of pre-log rings and to provide some tools to compute it in certain cases. One of the main strategies that we employ to describe log Hochschild homology entails passing through the module of log Kähler differentials. An additional technique that we adopt to gather information about the log Hochschild homology of some specific pre-log rings is to interlock it in a long exact sequence relating it to the ordinary Hochschild homology groups. An example in which this method applies nicely is the case where the remaining terms of the long exact sequence are the Hochschild homology of polynomial algebras in a finite number of variables, for which we try to provide an exhaustive description.
Mon, 01 Jun 2015 00:00:00 GMThttp://hdl.handle.net/1956/103332015-06-01T00:00:00ZNumerical Modelling of Microbial Enhanced Oil Recovery with Focus on Dynamic Effects: An Iterative Approach
http://hdl.handle.net/1956/10262
Numerical Modelling of Microbial Enhanced Oil Recovery with Focus on Dynamic Effects: An Iterative Approach
Skiftestad, Kai
Master thesis
Recovering more of the available oil has been a main driver behind the extensive work done in the field of enhanced oil recovery (EOR) over the last decades. Microbial enhanced oil recovery (MEOR) has been heavily researched and is picking up pace compared with other EOR methods in use today. MEOR is economically attractive and has huge potential if applied in accordance with reservoir conditions. This thesis considers a two-phase flow regime in homogeneous porous media under the influence of microbial activity. The mathematical model includes the concept of dynamic capillary pressure and is based on Darcy's law, the principle of mass conservation, and the diffusion/dispersion-advection equation. The inclusion of dynamic capillary pressure makes this a so-called non-standard model. In this work we aim to explore this, as well as the effect microbes have on flow and, ultimately, on oil production. The mathematical model has been implemented in MATLAB using a new, fully implicit, iterative approach to cope with the fact that dynamic capillarity induces an additional temporal derivative in the two-phase model. The spatial discretization is carried out with a control volume method, the TPFA, on a cell-centered grid in one dimension. The scheme is related to the papers [1-3]. The effects of dynamic capillary pressure are shown to be small at the macroscale for realistic oil reservoirs, while clearly visible in an extreme case that has been set up. Regarding microbial activity, we have constructed relations between concentration and interfacial tension based on the work in [4-8]. This is done to model the effect of reduced fluid-fluid tension on flow and, in turn, on oil production. It is shown that a substantial concentration of microbes has a positive effect on production, while small concentrations do not differ significantly from the case of no microbes.
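As a concrete illustration of the TPFA building block mentioned above, here is a minimal single-phase sketch on a 1D cell-centered grid with Dirichlet boundaries. The thesis couples this with two-phase transport, microbial activity, and dynamic capillarity, none of which are shown; all names are ours.

```python
import numpy as np

def tpfa_1d(perm, dx, p_left, p_right):
    """Solve -(k p')' = 0 on a uniform 1D cell-centered grid with TPFA.

    perm: array of cell permeabilities; dx: cell width;
    Dirichlet pressures imposed at the left and right boundaries.
    Face transmissibilities are harmonic averages of half-cell terms."""
    n = len(perm)
    # interior faces: harmonic average of the two adjacent half-cells
    t_face = 2.0 / (dx / perm[:-1] + dx / perm[1:])
    # boundary faces: half-cell distance from cell center to boundary
    t_left = perm[0] / (dx / 2.0)
    t_right = perm[-1] / (dx / 2.0)

    A = np.zeros((n, n))
    b = np.zeros(n)
    for f, t in enumerate(t_face):   # flux balance across interior faces
        A[f, f] += t
        A[f, f + 1] -= t
        A[f + 1, f + 1] += t
        A[f + 1, f] -= t
    A[0, 0] += t_left
    b[0] += t_left * p_left
    A[-1, -1] += t_right
    b[-1] += t_right * p_right
    return np.linalg.solve(A, b)
```

On a homogeneous medium this reproduces the exact linear pressure profile at the cell centers.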
Sat, 27 Jun 2015 00:00:00 GMThttp://hdl.handle.net/1956/102622015-06-27T00:00:00ZApplication of Vertically-Integrated Models with Subscale Vertical Dynamics to Field Sites for CO2 Sequestration
http://hdl.handle.net/1956/10095
Application of Vertically-Integrated Models with Subscale Vertical Dynamics to Field Sites for CO2 Sequestration
Guo, B; Bandilla, Karl W.; Keilegavlen, Eirik; Doster, Florian; Celia, Michael A.
Journal article
<p>To address engineering questions on the security of geological carbon sequestration (GCS), a broad range of computational models with different levels of complexity have been developed, from multiphase, multicomponent, fully coupled three-dimensional models to simplified analytical solutions. Within this wide range of models, a family of so-called vertical equilibrium (VE) models has been developed. These models assume that CO2 and brine have segregated due to buoyancy and have reached a hydrostatic pressure distribution in the vertical direction. Such VE models are computationally efficient due to the dimensional reduction in the vertical, and accurate as long as the VE assumption is satisfied. However, a study comparing results from a VE model with results from a full three-dimensional model found that there are realistic conditions under which the VE assumption is not justified, especially for geological formations with low vertical permeability, on the order of 10 millidarcy or lower [1].
</p><p>
In an attempt to overcome this limitation, a new type of vertically-integrated model, which relaxes the VE assumption while still using a vertically-integrated framework, has been developed [2]. The new model is cast in a multiscale framework, where the coarse scale is the horizontal domain and the fine scale is the vertical domain corresponding to the thickness of a geologic formation. This type of model maintains much of the computational advantage of the VE models while allowing a much wider range of problems to be modelled. In this paper, we extend the model in [2] to include horizontally layered geologic heterogeneities and develop a new dynamic reconstruction model, which we refer to as a “multi-layer dynamic reconstruction” model; the model in [2] is called a “single-layer dynamic reconstruction” model to distinguish the two approaches. We apply both dynamic reconstruction models to field injection sites, including a hypothetical injection scenario into the Mount Simon formation in the Illinois Basin, USA, and the well-known industrial-scale injection into the Utsira formation at the Sleipner site in Norway. The modelling results show that the multi-layer dynamic reconstruction model is capable of dealing with horizontally layered heterogeneities and gives results that agree reasonably well with results from the full multi-dimensional model, although in geologic layers with high permeability the reconstruction algorithm is not able to fully capture the horizontal driving forces due to buoyancy. This could be important over long time scales for highly permeable geologic layers. The single-layer dynamic reconstruction model was shown to be the right model choice for homogeneous formations with relatively low permeability, where it takes a long time for CO2 and brine to reach vertical equilibrium [2].
In this study, we found that for homogeneous formations with high permeability and steep capillary pressure curves, the single-layer dynamic reconstruction model gives results analogous to those of vertical equilibrium models, with the fast segregation dynamics requiring small time steps in the dynamic reconstruction algorithm. As such, vertical equilibrium models are the appropriate choice for systems with high permeability, although we also note that the behaviour of the brine relative permeability curve at low brine saturations is an additional important consideration and must be included in the overall analysis.</p>
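To make the vertical-equilibrium assumption that these models relax concrete, here is a minimal sharp-interface sketch (not the authors' dynamic reconstruction algorithm): given a coarse-scale, vertically averaged CO2 saturation, VE reconstructs the fine-scale vertical profile analytically. The homogeneous-column assumption and all names are ours.

```python
def ve_reconstruct(S_bar, H, s_wr):
    """Sharp-interface VE reconstruction in a homogeneous column.

    S_bar: coarse-scale (vertically averaged) CO2 saturation,
    H: formation thickness, s_wr: residual brine saturation.
    Returns the plume thickness h and a fine-scale saturation profile
    s(z): under VE, CO2 fills the top h of the column at 1 - s_wr."""
    h = S_bar * H / (1.0 - s_wr)     # invert S_bar = h * (1 - s_wr) / H

    def s(z):                        # z measured downward from the caprock
        return (1.0 - s_wr) if z <= h else 0.0
    return h, s
```

The dynamic reconstruction models discussed above replace this equilibrium profile with one obtained by solving the fine-scale vertical dynamics.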
Wed, 01 Jan 2014 00:00:00 GMThttp://hdl.handle.net/1956/100952014-01-01T00:00:00ZAssociation between body height and chronic low back pain: a follow-up in the Nord-Trøndelag Health Study
http://hdl.handle.net/1956/10052
Association between body height and chronic low back pain: a follow-up in the Nord-Trøndelag Health Study
Heuch, Ingrid; Heuch, Ivar; Hagen, Knut; Zwart, John-Anker
Journal article
<p>Objective: To study potential associations between body height and subsequent occurrence of chronic low back pain (LBP).</p>
<p>Design: Prospective cohort study.</p>
<p>Setting: The Nord-Trøndelag Health Study (HUNT). Data were obtained from an entire Norwegian county in the HUNT2 (1995–1997) and HUNT3 (2006–2008) surveys.</p>
<p>Participants: Altogether, 3883 women and 2662 men with LBP, and 10 059 women and 8725 men without LBP, aged 30–69 years, were included at baseline and reported after 11 years whether they suffered from LBP.</p>
<p>Main outcome measure: Chronic LBP, defined as pain persisting for 3 months during the previous year.</p>
<p>Results: Associations between body height and risk and recurrence of LBP were evaluated by generalised linear modelling. Potential confounders, such as BMI, age, education, employment, physical activity, smoking, blood pressure and lipid levels were adjusted for. In women with no LBP at baseline and body height ≥170 cm, a higher risk of LBP was demonstrated after adjustment for other risk factors (relative risk 1.19, 95% CI 1.03 to 1.37; compared with height <160 cm). No relationship was established among men or among women with LBP at baseline.</p>
<p>Conclusions: In women without LBP, a body height ≥170 cm may predispose to chronic LBP 11 years later. This may reflect mechanical issues or indicate a hormonal influence.</p>
Mon, 15 Jun 2015 00:00:00 GMThttp://hdl.handle.net/1956/100522015-06-15T00:00:00ZSkipped spawning in Northeast Arctic haddock Melanogrammus aeglefinus
http://hdl.handle.net/1956/10051
Skipped spawning in Northeast Arctic haddock Melanogrammus aeglefinus
Skjæraasen, Jon Egil; Korsbrekke, Knut; Nilsen, Trygve; Fonn, Merete; Kjesbu, Olav Sigurd; Dingsør, Gjert Endre; Nash, Richard David Marriott
Journal article
Large interannual fluctuations in the numbers of offspring joining a teleost population are common, yet factors affecting offspring production, a key driver of fish population size and demography, are often poorly understood. For some iteroparous teleosts, spawning omission (‘skipping’) following sexual maturation may occur, but this is typically difficult to verify. Through the detection of post-ovulatory follicles, i.e. evidence of past spawning activity, in gonads of females not spawning in the current year, we demonstrate skipping in Northeast Arctic (NEA) haddock Melanogrammus aeglefinus. Based on samples obtained just prior to the main spawning season in the Barents Sea from February to April in 2009 to 2012, the estimated population frequency of skippers ranged from 23% (2009) to 64% (2011) for females ≥35 cm in total length found in this area at this time. Skipping was associated with limited energy reserves and persisted with age, although it appeared to be most common in 5 yr old females. This suggests that skipping is linked to a combination of feeding condition and demography of the fish population. While previously virtually undocumented in this species, this phenomenon appears to be an integral life history feature for NEA haddock and may have a major impact on the annual realised egg production in this population. Finally, given the similarity between the results reported here and those recently published for NEA cod, we postulate that skipping may be a common occurrence in gadoids undertaking long, energetically demanding spawning migrations.
Wed, 22 Apr 2015 00:00:00 GMThttp://hdl.handle.net/1956/100512015-04-22T00:00:00ZBrøkforståing hjå elevar som startar i den vidaregåande skulen
http://hdl.handle.net/1956/9949
Brøkforståing hjå elevar som startar i den vidaregåande skulen
Antun, Jorunn
Master thesis
The aim of this thesis has been to gain greater insight into the understanding of fractions that pupils entering upper secondary school may have. Two research questions have formed the basis for my work:
1. What understanding of, and skills in, fractions and fraction arithmetic can be found among pupils entering upper secondary school?
2. What misconceptions about the concept of fractions and about fraction arithmetic can be found among the pupils?
Thirty-two pupils took part in the study. They had just started upper secondary school, after attending different lower secondary schools spread across the country. A written diagnostic test provided information about what the pupils mastered within central aspects of fractions and about possible misconceptions related to fractions and fraction arithmetic. About a third of the informants were then interviewed to gain greater insight into how they think. The data material was analysed from a constructivist view of learning.
My study shows a wide span in the understanding of fractions. Challenges appear in particular in the transitions between knowing procedures, carrying out procedures, and understanding how these actually work. Some pupils have only fragmentary insight into procedures. Other pupils focus narrowly on rules; they are familiar with various procedures and use algorithms when operating on fractions. The errors that appear are tied to the actual execution of the procedures: different rules are mixed up, or pupils get stuck when a procedure cannot be applied directly. One also sees examples of pupils who are able to see connections, and who in practice manage to combine different procedures and use the fraction concept in different contexts. Many still cannot explain what they are doing or why they are doing it. Lack of understanding appears to be the recurring factor that limits mastery at higher levels. Overall, this confirms that a foundation built on skills without understanding rests on loose ground; it takes little before pupils become unsure or before rules of calculation are forgotten.
Lack of understanding means that various misconceptions come to light, both when fractions are to be ordered and compared, when equivalent fractions are to be constructed, and in operations on fractions. Among other things, the study shows that several pupils transfer knowledge about the natural numbers to fractions and fraction arithmetic, and that pupils struggle to see and use the idea of equivalent fractions in a wider context. Taken together, the misconceptions uncovered provide further insight into fundamental gaps in the understanding of fractions among pupils entering upper secondary school.
Thu, 30 Apr 2015 00:00:00 GMThttp://hdl.handle.net/1956/99492015-04-30T00:00:00ZEntropy solutions of the compressible Euler equations
http://hdl.handle.net/1956/9868
Entropy solutions of the compressible Euler equations
Svärd, Magnus
Journal article
We consider the three-dimensional Euler equations of gas dynamics
on a bounded periodic domain and a bounded time interval. We prove that
the Lax-Friedrichs scheme can be used to produce a sequence of solutions with
ever finer resolution for any appropriately bounded (but not necessarily small)
initial data. Furthermore, we show that if the density remains strictly positive
in the sequence of solutions at hand, a subsequence converges to an entropy
solution. We provide numerical evidence for these results by computing a
sensitive Kelvin-Helmholtz problem.
Submitted to BIT Numerical Mathematics.
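The Lax-Friedrichs scheme at the heart of the result can be illustrated on a scalar model problem; the sketch below applies it to the 1D Burgers equation on a periodic domain, which is our stand-in for the Euler system, not the paper's actual setup.

```python
import numpy as np

def lax_friedrichs(u0, dx, dt, steps, flux=lambda u: 0.5 * u**2):
    """Lax-Friedrichs scheme for u_t + f(u)_x = 0 on a periodic domain.

    Update: u_j^{n+1} = (u_{j-1} + u_{j+1})/2
                        - dt/(2 dx) * (f(u_{j+1}) - f(u_{j-1})).
    Default flux is the Burgers flux f(u) = u^2 / 2."""
    u = np.asarray(u0, dtype=float)
    for _ in range(steps):
        f = flux(u)
        u = (0.5 * (np.roll(u, 1) + np.roll(u, -1))
             - dt / (2.0 * dx) * (np.roll(f, -1) - np.roll(f, 1)))
    return u
```

The scheme is conservative (the discrete integral of u is preserved exactly on a periodic grid) and, under a CFL restriction, satisfies a discrete maximum principle.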
Mon, 11 May 2015 00:00:00 GMThttp://hdl.handle.net/1956/98682015-05-11T00:00:00ZSUCCESS: SUbsurface CO2 storage–Critical elements and superior strategy
http://hdl.handle.net/1956/9794
SUCCESS: SUbsurface CO2 storage–Critical elements and superior strategy
Aker, Eyvind; Bjørnarå, Tore Ingvald; Braathen, Alvar; Brandvoll, Øyvind; Dahle, Helge K.; Nordbotten, Jan Martin; Aagaard, Per; Hellevang, Helge; Alemu, Binyam Lema; Pham, Van Thi Hai; Johansen, Harald; Wangen, Magnus; Nøttvedt, Arvid; Aavatsmark, Ivar; Johannessen, Truls; Durand, Dominique
Journal article
SUCCESS is a Center for Environmental Energy Research in Norway and performs research related to geological storage of CO2 in the subsurface. The SUCCESS centre was established by the Research Council of Norway together with several Norwegian research institutes and universities, and is hosted by Christian Michelsen Research. Through international cooperation and open research, the SUCCESS centre will fill gaps in strategic knowledge and provide a system for learning and for developing new competency to ensure safe and effective CO2 injection, storage and monitoring. In this paper we briefly present the main focus areas of the centre and some recent results obtained by the research partners. The results relate to geochemical effects, reservoir modeling, monitoring of the geomechanical response, and the marine environment. A brief status of the field trial, the Longyearbyen CO2 Lab at Svalbard, is also provided.
Sat, 01 Jan 2011 00:00:00 GMThttp://hdl.handle.net/1956/97942011-01-01T00:00:00ZField-case simulation of CO2-plume migration using vertical-equilibrium models
http://hdl.handle.net/1956/9791
Field-case simulation of CO2-plume migration using vertical-equilibrium models
Nilsen, Halvor Møll; Herrera, Paulo; Ashraf, Seyed Meisam; Ligaard, Ingeborg S.; Iding, Martin; Hermanrud, Christian; Lie, Knut-Andreas; Nordbotten, Jan Martin; Dahle, Helge K.; Keilegavlen, Eirik
Journal article
When injected into deep saline aquifers, CO2 moves radially away from the injection well and progressively higher in the formation because of buoyancy forces. Analyses have shown that after the injection period, CO2 may migrate several kilometers in the horizontal direction but only tens of meters in the vertical direction, limited by the aquifer caprock. Because of the large horizontal plume dimensions, three-dimensional numerical simulations of plume migration over long periods of time are computationally intensive. Thus, to obtain results within a reasonable time frame, one is typically forced to use coarse meshes and long time steps, which leads to inaccurate results because of numerical errors in resolving the plume tip.
Given the large aspect ratio between the vertical and horizontal plume dimensions, it is reasonable to approximate the CO2 migration using vertically averaged models. Such models can, in many cases, be more accurate than coarse three-dimensional computations. In particular, models based on vertical equilibrium (VE) are attractive for simulating the long-term fate of CO2 sequestered in deep saline aquifers. The reduced spatial dimensionality resulting from the vertical integration ensures that the computational performance of VE models exceeds that of standard three-dimensional models. Thus, VE models are suitable for studying the long-time, large-scale behavior of plumes in real large-scale CO2-injection projects. We investigate the use of VE models to simulate CO2 migration in a real large-scale field case based on data from the Sleipner site in the North Sea. We discuss the potential and limitations of VE models and show how they can be used to give reliable estimates of long-term CO2 migration. In particular, we focus on a VE formulation that incorporates the aquifer geometry and heterogeneity, and that considers the effects of hydrodynamic and residual trapping. We compare the results of VE simulations with those of standard reservoir simulation tools on test cases and discuss the advantages and limitations of each approach.
Sat, 01 Jan 2011 00:00:00 GMThttp://hdl.handle.net/1956/97912011-01-01T00:00:00ZUpscaled modeling of CO2 injection and migration with coupled thermal processes
http://hdl.handle.net/1956/9747
Upscaled modeling of CO2 injection and migration with coupled thermal processes
Gasda, Sarah Eileen; Stephansen, Annette; Aavatsmark, Ivar; Dahle, Helge K.
Journal article
A practical modeling approach for CO2 storage over relatively large length and time scales is the vertical-equilibrium
model, which solves partially integrated conservation equations for flow in two lateral dimensions. We couple heat
transfer with the vertical-equilibrium framework for fluid flow, focusing on the thermal processes that most affect the
CO2 plume. We investigate a simplified representation of heat exchange that also includes transport of heat within
the plume. In addition, we explore available CO2 thermodynamic models for reliable prediction of density under
different injection pressures and temperatures. The model concept is demonstrated on simplified systems.
Tue, 01 Jan 2013 00:00:00 GMThttp://hdl.handle.net/1956/97472013-01-01T00:00:00ZAssessing Model Uncertainties Through Proper Experimental Design
http://hdl.handle.net/1956/9745
Assessing Model Uncertainties Through Proper Experimental Design
Hvidevold, Hilde Kristine; Alendal, Guttorm; Johannessen, Truls; Mannseth, Trond
Journal article
This paper assesses how parameter uncertainties in the model for the rise velocity of CO2 droplets in the ocean cause uncertainties in their rise and dissolution in marine waters. The parameter uncertainties in the rise velocity of both hydrate-coated and hydrate-free droplets are estimated from experimental data. The rise velocity is then coupled with a mass-transfer model to simulate the dissolution of a single droplet.
The assessment shows that parameter uncertainties are highest for large droplets. However, it is also shown that in some circumstances varying the temperature gives a significant change in the rise distance of droplets.
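A minimal sketch of the kind of coupling described above: a rising droplet shrinks by interfacial mass transfer while its depth is advanced with a (radius-dependent) rise velocity. This is not the authors' model; the forward-Euler update, the shrink law dm/dt = -k·4πr²·c_s, and all parameter values are illustrative assumptions.

```python
import math

def droplet_fate(r0, depth, w, k, c_s, rho=930.0, dt=1.0):
    """Euler integration of a rising, dissolving droplet (sketch).

    r0: initial radius [m]; depth: release depth [m];
    w(r): rise velocity [m/s]; k: mass-transfer coefficient [m/s];
    c_s: solubility [kg/m^3]; rho: droplet density [kg/m^3]
    (930 kg/m^3 is an illustrative liquid-CO2 value, not from the paper).
    Mass balance: dm/dt = -k * 4*pi*r^2 * c_s.
    Returns (final depth, rise distance) when the droplet has
    effectively dissolved or has reached the surface."""
    r, z = r0, depth
    while r > 1e-5 and z > 0.0:
        area = 4.0 * math.pi * r**2
        m = rho * (4.0 / 3.0) * math.pi * r**3
        m = max(m - k * area * c_s * dt, 0.0)   # dissolve through surface
        r = (3.0 * m / (4.0 * math.pi * rho)) ** (1.0 / 3.0)
        z -= w(r) * dt                          # ascend one time step
    return z, depth - z
```

With a slow, constant rise velocity a small droplet dissolves before reaching the surface, which is the regime where the rise-velocity uncertainty matters most.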
Tue, 01 Jan 2013 00:00:00 GMThttp://hdl.handle.net/1956/97452013-01-01T00:00:00ZRisk of Leakage versus Depth of Injection in Geological Storage
http://hdl.handle.net/1956/9724
Risk of Leakage versus Depth of Injection in Geological Storage
Celia, Michael A.; Nordbotten, Jan Martin; Bachu, Stefan; Dobossy, Mark E.; Court, Benjamin
Journal article
One of the outstanding challenges for large-scale CCS operations is to develop reliable quantitative risk assessments with a focus on leakage of both injected CO2 and displaced brine. A critical leakage pathway is associated with the century-long legacy of oil and gas exploration and production, which has led to many millions of wells being drilled. Many of those wells are in locations that would otherwise be excellent candidates for CCS operations, especially across many parts of North America. Quantitative analysis of the problem requires special computational techniques because of the unique challenges associated with simulation of injection and leakage in systems that include hundreds or thousands of existing wells over domains characterized by layered structures in the vertical direction and very large horizontal extent. An important feature of these kinds of systems is the depth of each well, and the fact that the number of wells penetrating different formations decreases as a function of depth. As such, one might reasonably expect the risk of leakage to decrease with depth of injection. With the special computational models developed to simulate injection and leakage along multiple wells, in layered systems with multiple formations, quantitative assessment of risk reduction as a function of injection depth can be made. An example of such a system corresponds to the Wabamun Lake area southwest of Edmonton, Alberta, Canada, where several large coal-fired power plants are located. Use of information about both the existing wells and the local stratigraphy allows a realistic model to be constructed. Leakage along existing wells is assumed to follow Darcy’s Law, and is characterized by a set of effective permeability values. These values are assigned stochastically, using several different methods, within a Monte Carlo simulation framework. Computational results show the clear trade-off between depth of injection and risk of leakage. 
The results also show how properties within the different formations affect the risk profiles. In the Wabamun Lake area, one of the formations has the highest injectivity, by far, while having a moderate number of existing wells. Its moderate risk of leakage, as compared to injections in formations above and below, shows some of the key factors that are likely to influence injection design for large-scale CCS operations.
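The Monte Carlo idea can be sketched as follows, under loudly stated assumptions: leakage along each well follows Darcy's law with a stochastically assigned effective permeability, and risk is the fraction of realizations whose total leakage flux exceeds a threshold. The log-uniform permeability range, fluid properties, and threshold below are all hypothetical, not the values used in the study.

```python
import random

def leakage_risk(n_wells_by_depth, dp_by_depth, mu=5e-4, area=0.05,
                 length=100.0, q_crit=1e-7, trials=10000, seed=0):
    """Monte Carlo estimate of leakage risk versus injection depth (sketch).

    For each candidate injection formation (indexed by depth), sample a
    log-uniform effective permeability for every well that penetrates it,
    compute a Darcy flux q = (k / mu) * (dp / length) * area along the
    well, and count the trials in which total leakage exceeds q_crit.
    n_wells_by_depth: wells penetrating each formation (fewer with depth);
    dp_by_depth: driving overpressure for each formation [Pa]."""
    rng = random.Random(seed)
    risks = []
    for n_wells, dp in zip(n_wells_by_depth, dp_by_depth):
        exceed = 0
        for _ in range(trials):
            total = 0.0
            for _ in range(n_wells):
                # effective well permeability: log-uniform 1e-18..1e-12 m^2
                k = 10.0 ** rng.uniform(-18, -12)
                total += k / mu * dp / length * area
            exceed += total > q_crit
        risks.append(exceed / trials)
    return risks
```

Because deeper formations are penetrated by fewer wells, the estimated risk decreases with injection depth, mirroring the trade-off discussed above.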
Sun, 01 Feb 2009 00:00:00 GMThttp://hdl.handle.net/1956/97242009-02-01T00:00:00ZActive and integrated management of water resources throughout CO2 capture and sequestration operations
http://hdl.handle.net/1956/9720
Active and integrated management of water resources throughout CO2 capture and sequestration operations
Court, Benjamin; Celia, Michael A.; Nordbotten, Jan Martin; Elliot, Thomas R.
Journal article
Most projected climate-change mitigation strategies will require a significant expansion of CO2 Capture and Sequestration (CCS) in the next two decades. Four major categories of challenges are being actively researched: CO2 capture cost, geological sequestration safety, legal and regulatory barriers, and public acceptance. Herein we propose an additional major challenge category that cuts across all CCS operations: water management. For example, a coal-fired power plant retrofitted for CCS requires twice as much cooling water as the original plant. This increased demand may be accommodated by brine extraction and treatment, which would concurrently function as large-scale pressure management and a potential source of freshwater. At present, the interactions among freshwater extraction, CO2 injection, and brine management are being considered too narrowly (and, in the case of freshwater, almost completely overlooked) in the technical and regulatory CCS community. This paper presents an overview of each of these challenges and of potential integration opportunities. Active management of CCS operations through an integrated approach, including brine production, treatment, use for cooling, and partial reinjection, can address these challenges simultaneously, with several synergistic advantages. The paper also considers the related potential impacts of pore-space competition (with future groundwater use, gas storage and shale gas) on CCS expansion. Freshwater and brine must become key decision-making inputs throughout CCS operations, building on existing successful industrial-scale integrations.
Sat, 01 Jan 2011 00:00:00 GMThttp://hdl.handle.net/1956/97202011-01-01T00:00:00ZAn efficient software framework for performing industrial risk assessment of leakage for geological storage of CO2
http://hdl.handle.net/1956/9719
An efficient software framework for performing industrial risk assessment of leakage for geological storage of CO2
Dobossy, Mark E.; Celia, Michael A.; Nordbotten, Jan Martin
Journal article
In response to anthropogenic CO2 emissions, geological storage has emerged as a practical and scalable bridge technology while renewables and other environmentally friendly energy production methods mature. While an attractive solution, geological storage of CO2 has inherent risk. Two primary concerns are recognized: (1) leakage of CO2 through caprock imperfections, and (2) brine displacement resulting in contamination of drinking water sources. Three mechanisms for both CO2 and brine leakage have been identified: diffuse leakage through the caprock, leakage through faults and fractures in the caprock, and leakage through man-made pathways such as abandoned wells from oil and gas exploration. While the first two leakage mechanisms are important, we emphasize the risks associated with the presence of abandoned wells. This is due to the large number and density of wells from a history of oil and gas exploration around the world, and the high degree of uncertainty surrounding the properties of these abandoned wells. With legislation currently proposed in both the United States and Europe, a need is emerging for practical assessment of leakage risk. In order to accurately predict leakage of brine and CO2 from the injection layer, the geological information for the injection site and the location and makeup of the man-made leakage pathways alluded to previously must be taken into account. Unfortunately, both the geology and the abandoned-well metadata are typically highly uncertain, and this uncertainty must be accounted for. With such a high number of random variables, the current state of the art is to run many realizations of a system using a Monte Carlo approach. This requires that the underlying solution algorithms be accurate and efficient. In the past, many researchers in both academia and industry have turned to robust numerical analysis packages used in the oil industry.
However, due to the large range of scales important to this problem (domains of tens of kilometers on a side affected by leakage pathways with diameters of tens of centimeters), such modeling techniques become computationally expensive for all but the most basic analyses. A computational model developed at Princeton University, currently being commercialized by Geological Storage Consultants, LLC, has been shown to be efficient, with sufficient accuracy to allow comprehensive risk assessment of CO2 injection projects. The model allows solution methods to be mixed: computationally expensive algorithms are used for formations of greater importance (e.g., the injection formation), and more efficient, simplified algorithms in other parts of the domain. This ability to arbitrarily mix solution methods offers significant flexibility in the design and execution of models. This paper describes the framework and algorithms used, and illustrates the importance of efficiency and parallelism using the case study of an injection site in Alberta, Canada. We show how the framework can be used for project planning, for risk mitigation (insurance), and by regulatory groups. Finally, the importance of flexible analysis tools that allow efficient and effective management of computational resources is discussed.
Sat, 01 Jan 2011 00:00:00 GMThttp://hdl.handle.net/1956/97192011-01-01T00:00:00ZDetecting leakage of brine or CO2 through abandoned wells in a geological sequestration operation using pressure monitoring wells
http://hdl.handle.net/1956/9718
Detecting leakage of brine or CO2 through abandoned wells in a geological sequestration operation using pressure monitoring wells
Nogues, Juan P.; Nordbotten, Jan Martin; Celia, Michael A.
Journal article
For risk assessment, policy design and GHG emission accounting, it is extremely important to know whether any CO2 or brine has leaked from a geological sequestration (GS) operation, and hence whether available technologies can detect such leakage. Detection of leakage is one of the most challenging problems associated with GS due to the high uncertainty in the nature and location of leakage pathways. In North America, for example, millions of legacy oil and gas wells present the possibility that CO2 and brine may leak out of the injection formation. The available information on these potentially leaky wells is very limited, and the main parameters that control leakage, such as the permeability of the sealing material, are not known. Here we explore the possibility of detecting such leakage using pressure-monitoring wells located in a formation overlying the injection formation. The detection analysis is based on a system of equations that solves for the propagation of a pressure pulse using the superposition principle and an approximation to the well function. We explore what can be gained by using pressure-monitoring wells and what the limitations are, given a specific accuracy threshold of the measuring device. We also address the question of where these monitoring wells should be placed to optimize the objective of a monitoring scheme. We believe these results can ultimately lead to practical design strategies for monitoring schemes, including quantitative estimation of the increased probability of leak detection per added observation well.
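A minimal sketch of the pressure-pulse computation described above, assuming the classical Theis solution superposed over point sources; the series used for the well function and all parameter values in the test are our assumptions, not the authors' exact formulation.

```python
import math

def well_function(u, terms=30):
    """Theis well function W(u) = E1(u), via its convergent series
    W(u) = -gamma - ln(u) + sum_{n>=1} (-1)^{n+1} u^n / (n * n!)."""
    gamma = 0.5772156649015329           # Euler-Mascheroni constant
    total = -gamma - math.log(u)
    term = 1.0
    for n in range(1, terms + 1):
        term *= -u / n                   # term = (-u)^n / n!
        total -= term / n                # adds (-1)^{n+1} u^n / (n * n!)
    return total

def pressure_rise(x, y, t, sources, T, S):
    """Superposed pressure response at (x, y) and time t [s] from point
    sources (xi, yi, Qi), each leaking at rate Qi [m^3/s] into a
    formation with transmissivity T and storativity S (Theis solution,
    summed by the superposition principle)."""
    s_total = 0.0
    for xi, yi, q in sources:
        r2 = (x - xi) ** 2 + (y - yi) ** 2
        u = r2 * S / (4.0 * T * t)
        s_total += q / (4.0 * math.pi * T) * well_function(u)
    return s_total
```

Evaluating this forward model at candidate monitoring locations, and comparing the predicted signal against the instrument's accuracy threshold, is the kind of placement analysis the paper proposes.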
Sat, 01 Jan 2011 00:00:00 GMThttp://hdl.handle.net/1956/97182011-01-01T00:00:00ZThe impact of local-scale processes on large-scale CO2 migration and immobilization
http://hdl.handle.net/1956/9717
The impact of local-scale processes on large-scale CO2 migration and immobilization
Gasda, Sarah Eileen; Nordbotten, Jan Martin; Celia, Michael A.
Journal article
<p>Storage security of injected carbon dioxide (CO2) is an essential component of risk management for geological carbon sequestration operations. During the injection and early post-injection periods, CO2 leakage may occur along faults and leaky wells, but this risk may be partly managed by proper site selection and sensible deployment of monitoring and remediation technologies. On the other hand, long-term storage security is an entirely different risk management problem, one dominated by a mobile CO2 plume that may travel over very large spatial distances, over long time periods, before it is trapped by a variety of different physical and chemical processes. In the post-injection phase, the mobile CO2 plume migrates largely due to buoyancy forces, following the natural topography of the geological formation. The primary trapping mechanisms are capillary and solubility trapping, which evolve over thousands to tens of thousands of years and can immobilize a significant portion of the mobile, free-phase CO2 plume. However, both the migration and trapping processes are inherently complex, involving a combination of small and large spatial scales and acting over a range of time scales. Solubility trapping is a prime example of this complexity: small-scale density instabilities in the dissolved CO2 region lead to convective mixing that has a significant effect on the large-scale dissolution process over very long time scales. Another example is the effect of capillary forces on the evolution of mobile CO2, an often-neglected process except with regard to residual trapping. As the plume migrates due to buoyancy and viscous forces, local capillary effects acting at the CO2-brine interface lead to a transition zone where both fluids are present in the mobile state. This small-scale effect may have a significant impact on large-scale plume migration as well as on long-term residual and dissolution trapping.
Using appropriate models that can capture both large and small-scale effects is essential for understanding the role of these processes on the long-term storage security of CO2 sequestration operations.</p>
<p>There are several approaches to modeling long-term CO2 trapping mechanisms. One modeling option is the use of traditional numerical methods, which are often highly sophisticated models that can handle multiple complex phenomena with high levels of accuracy. However, these complex models quickly become prohibitively expensive for the type of large-scale, long-term modeling that is necessary for risk assessment applications such as the late post-injection period. We present an alternative modeling option that combines vertically-averaged governing equations with an upscaled representation of the dissolution-convective mixing process and the local capillary transition zone at the CO2-brine interface. CO2 injection is solved numerically on a coarse grid, capturing the large-scale injection problem and the post-injection capillary trapping, while the upscaled dissolution and capillary fringe models capture these subscale effects and eliminate the need for expensive grid refinement to capture the subscale instabilities associated with convective mixing or the details of the capillary transition zone. With this modeling approach, we demonstrate the effect of different modeling choices associated with dissolution and capillary processes for typical large-scale geological systems.</p>
Sat, 01 Jan 2011 00:00:00 GMThttp://hdl.handle.net/1956/97172011-01-01T00:00:00ZA 3D computational study of effective medium methods applied to fractured media
http://hdl.handle.net/1956/9705
A 3D computational study of effective medium methods applied to fractured media
Sævik, Pål Næverlid; Berre, Inga; Jakobsen, Morten; Lien, Martha
Journal article
This work evaluates and improves upon existing effective medium methods for permeability upscaling in fractured media. Specifically, we are concerned with the asymmetric self-consistent, symmetric self-consistent, and differential methods. In effective medium theory, inhomogeneity is modeled as ellipsoidal inclusions embedded in the rock matrix. Fractured media correspond to the limiting case of flat ellipsoids, for which we derive a novel set of simplified formulas. The new formulas have improved numerical stability properties, and require a smaller number of input parameters. To assess their accuracy, we compare the analytical permeability predictions with three-dimensional finite-element simulations. We also compare the results with a semi-analytical method based on percolation theory and curve-fitting, which represents an alternative upscaling approach. A large number of cases are considered, with varying fracture aperture, density, matrix/fracture permeability contrast, orientation, shape, and number of fracture sets. The differential method is seen to be the best choice for sealed fractures and thin open fractures. For highly permeable, connected fractures, the semi-analytical method provides the best fit to the numerical data, whereas the differential method breaks down. The two self-consistent methods can be used for both unconnected and connected fractures, although the asymmetric method is somewhat unreliable for sealed fractures. For open fractures, the symmetric method is generally the more accurate for moderate fracture densities, but only the asymmetric method is seen to have correct asymptotic behavior. The asymmetric method is also surprisingly accurate at predicting percolation thresholds.
Tue, 01 Oct 2013 00:00:00 GMThttp://hdl.handle.net/1956/97052013-10-01T00:00:00ZDetermining individual variation in growth and its implication for life-history and population processes using the Empirical Bayes method
http://hdl.handle.net/1956/9682
Determining individual variation in growth and its implication for life-history and population processes using the Empirical Bayes method
Vincenzi, Simone; Mangel, Marc; Crivelli, Alain J.; Munch, Stephan; Skaug, Hans J.
Journal article
The differences in demographic and life-history processes between organisms living in the same population have important consequences for ecological and evolutionary dynamics. Modern statistical and computational methods allow the investigation of individual and shared (among homogeneous groups) determinants of the observed variation in growth. We use an Empirical Bayes approach to estimate individual and shared variation in somatic growth using a von Bertalanffy growth model with random effects. To illustrate the power and generality of the method, we consider two populations of marble trout Salmo marmoratus living in Slovenian streams, where individually tagged fish have been sampled for more than 15 years. We use year-of-birth cohort, population density during the first year of life, and individual random effects as potential predictors of the von Bertalanffy growth function’s parameters k (rate of growth) and L∞ (asymptotic size). Our results showed that size ranks were largely maintained throughout the marble trout lifetime in both populations. According to the Akaike Information Criterion (AIC), the best models showed different growth patterns for year-of-birth cohorts as well as the existence of substantial individual variation in growth trajectories after accounting for the cohort effect. For both populations, models including density during the first year of life showed that growth tended to decrease with increasing population density early in life. Model validation showed that predictions of individual growth trajectories using the random-effects model were more accurate than predictions based on the mean size-at-age of fish.
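As a concrete illustration, the growth curve at the heart of this model is simple to evaluate. The following is a minimal sketch with hypothetical parameter values; the mean asymptotic size, growth rate, and individual perturbations below are invented for illustration, not estimates from the marble trout data:

```python
import math

def von_bertalanffy(t, L_inf, k, t0=0.0):
    """Von Bertalanffy growth function: expected length at age t."""
    return L_inf * (1.0 - math.exp(-k * (t - t0)))

# Hypothetical population-level means, perturbed by individual random
# effects on L_inf and k (values invented for illustration).
L_inf_mean, k_mean = 30.0, 0.35   # asymptotic size (cm), growth rate (1/yr)
individual_effects = [(-2.0, 0.05), (0.0, 0.0), (3.0, -0.04)]

lengths_at_age_5 = [
    von_bertalanffy(5.0, L_inf_mean + dL, k_mean + dk)
    for dL, dk in individual_effects
]
```

In the random-effects setting of the paper, each fish gets its own pair of perturbations drawn from an estimated distribution; size ranks are maintained when these individual curves rarely cross.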
Thu, 11 Sep 2014 00:00:00 GMThttp://hdl.handle.net/1956/96822014-09-11T00:00:00ZInvestigating population genetic structure in a highly mobile marine organism: The minke whale Balaenoptera acutorostrata acutorostrata in the North East Atlantic
http://hdl.handle.net/1956/9675
Investigating population genetic structure in a highly mobile marine organism: The minke whale Balaenoptera acutorostrata acutorostrata in the North East Atlantic
Sanchez, Maria Quintela; Skaug, Hans J.; Øien, Nils Inge; Haug, Tore; Seliussen, Bjørghild Breistein; Solvang, Hiroko Kato; Pampoulie, Christophe; Kanda, Naohisa; Pastene, Luis A.; Glover, Kevin
Journal article
Inferring the number of genetically distinct populations and their levels of connectivity is of key importance for the sustainable management and conservation of wildlife. This represents an extra challenge in the marine environment, where there are few physical barriers to gene flow and populations may overlap in time and space. Several studies have investigated the population genetic structure within the North Atlantic minke whale, with contrasting results. In order to address this issue, we analyzed ten microsatellite loci and 331 bp of the mitochondrial D-loop in 2990 whales sampled in the North East Atlantic in 2004 and 2007–2011. The primary findings were: (1) No spatial or temporal genetic differentiation was observed for either class of genetic marker. (2) mtDNA identified three distinct mitochondrial lineages without any underlying geographical pattern. (3) Nuclear markers showed evidence of a single panmictic population in the NE Atlantic, according to STRUCTURE's highest average likelihood found at K = 1. (4) When K = 2 was accepted, based on Evanno's test, whales were divided into two more or less equally sized groups that showed significant genetic differentiation between them, but without any sign of an underlying geographic pattern. However, mtDNA for these individuals did not corroborate the differentiation. (5) In order to further evaluate the potential for cryptic structuring, a set of 100 in silico generated panmictic populations was examined using the same procedures as above, showing genetic differentiation between two artificially divided groups, similar to the aforementioned observations. This demonstrates that clustering methods may spuriously reveal cryptic genetic structure. Based upon these data, we find no evidence to support the existence of spatial or cryptic population genetic structure of minke whales within the NE Atlantic. However, in order to conclusively evaluate population structure within this highly mobile species, more markers will be required.
Tue, 30 Sep 2014 00:00:00 GMThttp://hdl.handle.net/1956/96752014-09-30T00:00:00ZNumerical solutions of two-phase flow with applications to CO2 sequestration and polymer flooding
http://hdl.handle.net/1956/9666
Numerical solutions of two-phase flow with applications to CO2 sequestration and polymer flooding
Mykkeltvedt, Trine Solberg
Doctoral thesis
<p>This thesis addresses challenges related to mathematical and numerical modeling of flow in porous media. To address these challenges, two applications are considered: firstly, counter-current two-phase flow in heterogeneous porous media and, secondly, polymer flooding in the context of enhanced oil recovery. Furthermore, an upscaled model for CO2 migration is used to estimate effective rates of convective mixing from commercial-scale injection.</p>
<p>Numerically, the upstream mobility scheme is widely used to solve hyperbolic conservation laws. For flow in heterogeneous porous media there exists no convergence analysis for this scheme. Studies of the convergence performance of this scheme are important due to the extensive use of the upstream mobility scheme in the reservoir simulation community. We show that the upstream mobility scheme may exhibit large errors compared to the physically relevant solution when applied to a counter-current flow in a reservoir where discontinuities in the flux function are introduced through the permeability. A small perturbation of the relative permeability values can lead to a large difference in the solution produced by the upstream mobility scheme. Not only does the scheme encounter large errors compared to what is considered to be the physically relevant solution, but the solution also lacks entropy consistency.</p>
<p>High-resolution schemes are often used for model problems where high accuracy is required in the presence of shocks or discontinuities. Polymer flooding represents such a system and is a difficult process to model, especially since the dynamics of the flow lead to concentration fronts that are not self-sharpening. The application of modern high-resolution schemes to a system that models polymer flooding is considered, and different first- and higher-order schemes are compared in terms of how the discontinuities are treated. Through numerous numerical experiments, some special numerical artifacts of the polymer system are uncovered. The need for high-resolution schemes and the importance of their applicability to the polymer problem are addressed.</p>
<p>The process of CO2 migration ranges over multiple scales, which leads to challenges in modeling and simulating this system. This motivates the need for an upscaled model and upscaled parameters that can capture both large- and small-scale spatial and temporal effects. The ongoing CO2 injection at the Utsira formation is considered as a field-scale study for CO2 storage. Through an upscaled model for CO2 migration we obtain the first field-scale estimates of the effective upscaled convective mixing rates in this context. The findings are comparable to, but somewhat higher than, rates reported in the existing literature based on fine-scale numerical simulations. Our work validates the use of numerical simulations to obtain upscaled convective mixing rates, while at the same time confirming that convective mixing is an important, quantifiable storage mechanism at the Utsira formation. To account for uncertainties in the description of the storage formation, sensitivity studies are conducted with respect to some of the most uncertain parameters.</p>
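For readers unfamiliar with the scheme examined in the first part, the upstream mobility idea can be sketched in a few lines: the face flux of each phase uses the mobility from the cell that phase flows out of. The snippet below is a hypothetical illustration (the function name, the centred-mobility sign estimate, and all parameter values are assumptions, not the thesis implementation):

```python
# Sketch of the upstream mobility choice for a two-phase face flux with
# buoyancy, in the fractional-flow form u_w = f_w * (u_t + lam_n * G).
# Names and values are illustrative only.

def upstream_flux(u_total, lam_w_L, lam_w_R, lam_n_L, lam_n_R, grav):
    """Return (wetting, non-wetting) phase fluxes across one face.

    Each phase mobility is taken from the cell upstream of that phase's
    own driving force; the sign of the driving force is estimated with
    centred mobilities (one common way to break the circularity).
    """
    lam_w_c = 0.5 * (lam_w_L + lam_w_R)
    lam_n_c = 0.5 * (lam_n_L + lam_n_R)
    drive_w = u_total + lam_n_c * grav   # grav > 0 drives wetting phase L -> R
    drive_n = u_total - lam_w_c * grav
    lam_w = lam_w_L if drive_w >= 0.0 else lam_w_R
    lam_n = lam_n_L if drive_n >= 0.0 else lam_n_R
    lam_t = lam_w + lam_n
    u_w = lam_w / lam_t * (u_total + lam_n * grav) if lam_t > 0.0 else 0.0
    return u_w, u_total - u_w

# Pure gravity segregation (u_total = 0): phases flow in opposite directions.
u_w, u_n = upstream_flux(0.0, 1.0, 1.0, 1.0, 1.0, 2.0)
```

In counter-current situations like this, the upstream choice differs between the two phases across the same face, which is exactly the regime where the thesis reports large errors and entropy inconsistency.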
Fri, 19 Dec 2014 00:00:00 GMThttp://hdl.handle.net/1956/96662014-12-19T00:00:00ZDo abnormal serum lipid levels increase the risk of chronic low back pain? The Nord-Trøndelag Health Study
http://hdl.handle.net/1956/9614
Do abnormal serum lipid levels increase the risk of chronic low back pain? The Nord-Trøndelag Health Study
Heuch, Ingrid; Heuch, Ivar; Hagen, Knut; Zwart, John-Anker
Journal article
Background: Cross-sectional studies suggest associations between abnormal lipid levels and prevalence of low back pain (LBP), but it is not known if there is any causal relationship.
Objective: The objective was to determine, in a population-based prospective cohort study, whether there is any relation between levels of total cholesterol, high density lipoprotein (HDL) cholesterol and triglycerides and the probability of experiencing subsequent chronic LBP, among individuals both with and without LBP at baseline.
Methods: Information was collected in the community-based HUNT 2 (1995–1997) and HUNT 3 (2006–2008) surveys of an entire Norwegian county. Participants were 10,151 women and 8731 men aged 30–69 years, not affected by chronic LBP at baseline, and 3902 women and 2666 men with LBP at baseline. Eleven years later the participants indicated whether they currently suffered from chronic LBP.
Results: Among women without LBP at baseline, HDL cholesterol levels were inversely associated and triglyceride levels positively associated with the risk of chronic LBP at end of follow-up in analyses adjusted for age only. Adjustment for the baseline factors education, work status, physical activity, smoking, blood pressure and in particular BMI largely removed these associations (RR: 0.96, 95% CI: 0.85–1.07 per mmol/l of HDL cholesterol; RR: 1.16, 95% CI: 0.94–1.42 per unit of lg(triglycerides)). Total cholesterol levels showed no associations. In women with LBP at baseline and men without LBP at baseline weaker relationships were observed. In men with LBP at baseline, an inverse association with HDL cholesterol remained after complete adjustment (RR: 0.83, 95% CI: 0.72–0.95 per mmol/l).
Conclusion: Crude associations between lipid levels and risk of subsequent LBP in individuals without current LBP are mainly caused by confounding with body mass. However, an association with low HDL levels may still remain in men who are already affected and possibly experience a higher pain intensity.
Thu, 18 Sep 2014 00:00:00 GMThttp://hdl.handle.net/1956/96142014-09-18T00:00:00ZIdentification of subsurface structures using electromagnetic data and shape priors
http://hdl.handle.net/1956/9612
Identification of subsurface structures using electromagnetic data and shape priors
Tveit, Svenn; Bakr, Shaaban A.; Lien, Martha; Mannseth, Trond
Journal article
We consider the inverse problem of identifying large-scale subsurface structures using the controlled source electromagnetic method. To identify structures in the subsurface where the contrast in electric conductivity can be small, regularization is needed to preserve structural information. We propose to combine two approaches for regularization of the inverse problem. In the first approach we utilize a model-based, reduced, composite representation of the electric conductivity that is highly flexible, even for a moderate number of degrees of freedom. With a low number of parameters, the inverse problem is efficiently solved using a standard, second-order gradient-based optimization algorithm. Further regularization is obtained using structural prior information, available, e.g., from interpreted seismic data. The reduced conductivity representation is suitable for incorporation of structural prior information. Such prior information cannot, however, be accurately modeled with a Gaussian distribution. To alleviate this, we incorporate the structural information using shape priors. The shape prior technique requires the choice of a kernel function, which is application dependent. We argue for using the conditionally positive definite kernel, which is shown to have computational advantages over the commonly applied Gaussian kernel for our problem. Numerical experiments on various test cases show that the methodology is able to identify fairly complex subsurface electric conductivity distributions while preserving structural prior information during the inversion.
Sun, 01 Mar 2015 00:00:00 GMThttp://hdl.handle.net/1956/96122015-03-01T00:00:00ZInversion of CSEM Data for Subsurface Structure Identification and Numerical Assessment of the Upstream Mobility Scheme
http://hdl.handle.net/1956/9611
Inversion of CSEM Data for Subsurface Structure Identification and Numerical Assessment of the Upstream Mobility Scheme
Tveit, Svenn
Doctoral thesis
<p>Part I</p>
<p>In this part of the thesis, two different methodologies for solving the inverse problem of mapping the subsurface electric conductivity distribution using controlled source electromagnetic (CSEM) data are presented. The two inversion methodologies are based on a classical and a Bayesian approach for solving inverse problems, respectively.</p>
<p>In the classical approach, we regularize the inverse problem by incorporating structural prior information available from, e.g., interpreted seismic data. In many cases, the outcome of an interpretation of seismic data cannot be well approximated by a Gaussian distribution. Hence, to incorporate non-Gaussian prior information we have applied the shape prior technique. Here, an implicit transformation of variables facilitates the incorporation of non-Gaussian prior information, at the expense of an application-dependent kernel function.</p>
<p>In the Bayesian approach, a combination of prior knowledge and observed data results in a solution given as a posterior probability density function (PDF). To sample from the posterior PDF, a sequential Bayesian method, the ensemble Kalman filter (EnKF), is applied. Structural prior information is naturally incorporated as a part of the Bayesian framework.</p>
<p>To represent large-scale subsurface structures two model-based, composite parameterizations based on the level-set representation are applied in the inversion methodologies. By using a reduced number of parameters in the representation, a regularization of the inverse problem is achieved. Moreover, it enables the use of second-order gradient-based optimization algorithms in the classical approach.</p>
<p>Part II</p>
<p>In this part of the thesis, a numerical investigation of the upstream mobility scheme for calculating fluid flow in porous media is presented. Previous studies have shown that the upstream mobility scheme exhibits erroneous behaviour when approximating pure gravity segregation flow in 1D heterogeneous porous media. The errors shown, however, were small in magnitude. In this work, numerical experiments that include both advection and gravity segregation are conducted. It is shown that the errors produced in this case may be larger in magnitude than for pure gravity segregation, but are only found in counter-current flow situations.</p>
Fri, 27 Mar 2015 00:00:00 GMThttp://hdl.handle.net/1956/96112015-03-27T00:00:00ZInexact linear solvers for control volume discretizations in porous media
http://hdl.handle.net/1956/9588
Inexact linear solvers for control volume discretizations in porous media
Keilegavlen, Eirik; Nordbotten, Jan Martin
Journal article
We discuss the construction of multi-level inexact linear solvers for control volume discretizations for porous media. The methodology forms a contrast to standard iterative solvers by utilizing an algebraic hierarchy of approximations which preserve the conservative structure of the underlying control volume. Our main result is the generalization of multiscale control volume methods as multi-level inexact linear solvers for conservative discretizations through the design of a particular class of preconditioners. This construction thereby bridges the gap between multiscale approximation and linear solvers. The resulting approximation sequence is referred to as inexact solvers. We seek a conservative solution, in the sense of control-volume discretizations, within a prescribed accuracy. To this end, we give an abstract guaranteed a posteriori error bound relating the accuracy of the linear solver to the underlying discretization. These error bounds are explicitly computable for the grids considered herein. The aforementioned hierarchy of conservative approximations can also be considered in the context of multi-level upscaling, and this perspective is highlighted in the text as appropriate. The new construction is supported by numerical examples highlighting the performance of the inexact linear solver realized in both a multi- and two-level context for two- and three-dimensional heterogeneous problems defined on structured and unstructured grids. The numerical examples assess the performance of the approach both as an inexact solver, as well as in comparison to standard algebraic multigrid methods.
Wed, 03 Dec 2014 00:00:00 GMThttp://hdl.handle.net/1956/95882014-12-03T00:00:00ZCapturing the coupled hydro-mechanical processes occurring during CO2 injection – example from In Salah
http://hdl.handle.net/1956/9489
Capturing the coupled hydro-mechanical processes occurring during CO2 injection – example from In Salah
Bjørnarå, Tore Ingvald; Mathias, Simon A.; Nordbotten, Jan Martin; Park, Joonsang; Bohloli, Bahman
Journal article
At In Salah, CO2 is removed from the production stream of several natural gas fields and re-injected into a deep and relatively thin saline formation at three different locations. The observed deformation of the surface above the injection sites has partly been attributed to expansion and compaction of the storage aquifer, but analysis of field data and measurements from monitoring has verified that substantial activation of fractures and faults occurs. History-matching observed data in numerical models involves several model iterations at a high computational cost. To address this, a simplified model that captures the key hydro-mechanical effects, while retaining reasonable accuracy when applied to realistic field data from In Salah, has been derived and compared to a fully resolved model. Results from the case study presented here show a significant saving in computational cost (36%) and a computational speed-up factor of 2.7.
Wed, 01 Jan 2014 00:00:00 GMThttp://hdl.handle.net/1956/94892014-01-01T00:00:00ZPseudo H-type algebras and sub-Riemannian cut locus
http://hdl.handle.net/1956/9449
Pseudo H-type algebras and sub-Riemannian cut locus
Autenried, Christian
Doctoral thesis
Fri, 13 Feb 2015 00:00:00 GMThttp://hdl.handle.net/1956/94492015-02-13T00:00:00ZOn weak solutions and a convergent numerical scheme for the compressible Navier-Stokes Equations
http://hdl.handle.net/1956/9442
On weak solutions and a convergent numerical scheme for the compressible Navier-Stokes Equations
Svärd, Magnus
Journal article
In this paper, the three-dimensional compressible Navier-Stokes equations are considered on a periodic domain. We propose a semi-discrete numerical scheme and derive a priori bounds that ensure that the resulting system of ordinary differential equations is solvable for any h > 0. An a posteriori examination that the density remains uniformly bounded away from 0 will establish that a subsequence of the numerical solutions converges to a weak solution of the compressible Navier-Stokes equations.
Thu, 26 Feb 2015 00:00:00 GMThttp://hdl.handle.net/1956/94422015-02-26T00:00:00ZAssessment of Sequential and Simultaneous Ensemble-based History Matching Methods for Weakly Non-linear Problems
http://hdl.handle.net/1956/9398
Assessment of Sequential and Simultaneous Ensemble-based History Matching Methods for Weakly Non-linear Problems
Fossum, Kristian
Doctoral thesis
<p>The ensemble Kalman filter (EnKF) has, since its introduction in 1994, gained much attention as a tool for sequential data assimilation in many scientific areas. In recent years, the EnKF has been utilized for estimating the poorly known petrophysical parameters in petroleum reservoir models. The ensemble-based methodology has inspired several related methods, utilized both in data assimilation and for parameter estimation. All these methods, including the EnKF, can be shown to converge to the correct solution in the case of a Gaussian prior model, Gaussian data error, and linear model dynamics. However, for many problems where the methods are applied, this is not satisfied. Moreover, several numerical studies have shown that, for such cases, the different methods have different approximation errors.</p>
<p>Considering parameter estimation for problems where the model depends on the parameters in a non-linear fashion, this thesis explores the similarities and differences between the EnKF and the alternative methods. Several characteristics are established, and it is shown that each method represents a specific combination of these characteristics. By numerical comparison, it is further shown that a variation of the characteristics produces a variation of the approximation error.</p>
<p>A special emphasis is put on the effect of one characteristic: whether data are assimilated sequentially or simultaneously. Typically, several data types are utilized in the parameter estimation problem. In this thesis, we assume that each data type depends on the parameters in a specific non-linear fashion. Considering the assimilation of two weakly non-linear data types with different degrees of non-linearity, we show, through analytical studies, that the difference between sequential and simultaneous assimilation depends on the combination of data.</p>
<p>Via numerical modelling, we investigate the difference between sequential and simultaneous assimilation on toy models and simplified reservoir problems. Utilizing realistic reservoir data, we show that the assumption of differences in non-linearity for different data types holds. Moreover, we demonstrate that, for favourable degrees of non-linearity, it is beneficial to assimilate the data in order of ascending degree of non-linearity.</p>
Fri, 13 Feb 2015 00:00:00 GMThttp://hdl.handle.net/1956/93982015-02-13T00:00:00ZDynamic Capillary Effects in the Simulation of Flow and Transport in Porous Media: A New Linearisation Method
http://hdl.handle.net/1956/9374
Dynamic Capillary Effects in the Simulation of Flow and Transport in Porous Media: A New Linearisation Method
Teveldal, Silje Kjønaas
Master thesis
In this thesis, mathematical models with and without dynamic capillary effects are developed to model water flow and solute transport through a porous medium. The system of equations is discretised using the finite volume method TPFA in space and the backward Euler method in time. To solve the nonlinear systems appearing at each time step, robust linearisation methods are proposed. These methods do not involve the computation of derivatives. The methods are analysed and shown to be linearly convergent and robust. Moreover, the convergence is shown to be independent of the mesh size. The influence that the dynamic effects have on flow and transport is studied numerically. Additional numerical experiments were conducted to study the convergence of the linearisation schemes. The numerical results are shown to be in correspondence with the theoretical results.
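A derivative-free, linearly convergent linearisation of the kind described can be illustrated on a scalar toy problem. This sketch shows the general idea of an L-scheme-type fixed-point iteration; whether the thesis uses exactly this iteration is an assumption here, and the stabilisation constant and toy equation are invented for illustration:

```python
def l_scheme_solve(F, u0, L, tol=1e-10, max_iter=200):
    """Fixed-point linearisation u_{k+1} = u_k - F(u_k) / L.

    No derivatives of F are computed; the iteration converges linearly
    whenever the constant L dominates the growth of F near the root.
    """
    u = u0
    for k in range(max_iter):
        u_new = u - F(u) / L
        if abs(u_new - u) < tol:
            return u_new, k + 1
        u = u_new
    return u, max_iter

# Toy monotone nonlinearity: u + u**3 = 2 has the root u = 1.
root, iters = l_scheme_solve(lambda u: u + u**3 - 2.0, u0=0.0, L=5.0)
```

In the porous-media setting the same idea is applied cell-wise to the nonlinear algebraic systems produced at each backward-Euler time step, which is why robustness and mesh-independent convergence are the properties of interest.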
Thu, 25 Dec 2014 00:00:00 GMThttp://hdl.handle.net/1956/93742014-12-25T00:00:00ZOn the extension giving the truncated Witt vectors
http://hdl.handle.net/1956/9347
On the extension giving the truncated Witt vectors
Skjøtskift, Torgeir
Master thesis
We explore the theory of cohomology of groups and the classification of group extensions with abelian kernel. We then look at the group extensions that underlie the truncated Witt vectors on the truncation set {1,p}, where p is a prime number. It turns out that we can do without the multiplicative structure on the source ring A by factoring the extension's representing cocycle through a map into the p-fold tensor product of A divided out by the C_p-action.
Mon, 05 Jan 2015 00:00:00 GMThttp://hdl.handle.net/1956/93472015-01-05T00:00:00ZTensor Induction as Left Kan Extension
http://hdl.handle.net/1956/9319
Tensor Induction as Left Kan Extension
Aye, Kaythi
Master thesis
Tue, 02 Dec 2014 00:00:00 GMThttp://hdl.handle.net/1956/93192014-12-02T00:00:00ZWeak solutions and convergent numerical schemes of Brenner-Navier-Stokes equations
http://hdl.handle.net/1956/9230
Weak solutions and convergent numerical schemes of Brenner-Navier-Stokes equations
Svärd, Magnus
Research report
Lately, there has been some interest in modifications of the compressible Navier-Stokes equations to include diffusion of mass. In this paper, we investigate possible ways to add mass diffusion to the 1-D Navier-Stokes equations without violating the basic entropy inequality. As a result, we recover a general form of Brenner's modification of the Navier-Stokes equations. We consider Brenner's system along with another modification where the viscous terms collapse to a Laplacian diffusion. For each of the two modifications, we derive a priori estimates for the PDE, sufficiently strong to admit a weak solution; we propose a numerical scheme and demonstrate that it satisfies the same a priori estimates. For both modifications, we then demonstrate that the numerical schemes generate solutions that converge to a weak solution (up to a subsequence) as the grid is refined.
Tue, 20 Jan 2015 00:00:00 GMThttp://hdl.handle.net/1956/92302015-01-20T00:00:00ZModeling of oil reservoirs with focus on microbial induced effects
http://hdl.handle.net/1956/9207
Modeling of oil reservoirs with focus on microbial induced effects
Babatunde, Stanley Owulabi
Master thesis
In this thesis, we review some of the literature on EOR and discuss the processes, strengths and weaknesses of Microbial Enhanced Oil Recovery techniques. A two-phase flow model comprising water and oil, via the concept of mean pressure, has been formulated using mass conservation equations, Darcy's law and constitutive relations. This resulted in a set of coupled nonlinear parabolic partial differential equations with the primary variables being the mean pressure and water saturation. We discretized these equations in one dimension using a control volume discretization scheme in space and implicit Euler in time. We employed the IMPES approach, which decouples the primary variables. A model validation test was made by comparison with an analytical solution and with the Couplex-Gas benchmark. The model was used to investigate two major mechanisms by which bacterial activity helps enhance the recovery of residual oil.
Sat, 01 Nov 2014 00:00:00 GMThttp://hdl.handle.net/1956/92072014-11-01T00:00:00ZMathematical modelling of slow drug release from collagen matrices
http://hdl.handle.net/1956/9181
Mathematical modelling of slow drug release from collagen matrices
Erichsen, Birgitte Riisøen
Master thesis
This master's thesis is about controlled drug release, a relatively new area of mathematical modelling. There have been two major focuses: the first is to further understand a previously developed model for drug release from collagen matrices by solving it with a different numerical scheme, and the second is to develop a new model based on a different geometry. Both models are based on mass conservation and Fick's law, and are therefore possible to compare. The two models have been discretized and implemented, and the results compared to experimental data.
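Mass conservation plus Fick's law gives, in one dimension, the diffusion equation c_t = D c_xx. The sketch below shows how such a release model can be discretized with explicit finite differences; the parameter values, boundary conditions, and step counts are invented for illustration and are not the schemes used in the thesis:

```python
# Explicit finite differences for Fickian diffusion out of a slab:
# c_t = D * c_xx on (0, 1), no-flux wall at x = 0, perfect sink c = 0 at x = 1.
# All values below are hypothetical.
D, n = 1e-2, 50
dx = 1.0 / n
dt = 0.4 * dx**2 / D          # satisfies the FTCS stability bound dt <= dx^2 / (2D)
c = [1.0] * n                 # uniform initial drug loading
mass0 = sum(c) * dx

for step in range(2000):
    new = c[:]
    for i in range(n):
        left = c[i - 1] if i > 0 else c[i]      # no-flux mirror at x = 0
        right = c[i + 1] if i < n - 1 else 0.0  # sink at x = 1
        new[i] = c[i] + D * dt / dx**2 * (left - 2.0 * c[i] + right)
    c = new

# Cumulative fraction of drug released so far.
released = 1.0 - sum(c) * dx / mass0
```

Comparing such a release curve against experimental release data is the kind of validation step the thesis describes, here in the simplest possible geometry.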
Tue, 23 Sep 2014 00:00:00 GMThttp://hdl.handle.net/1956/91812014-09-23T00:00:00ZCell-centered finite volume discretizations for deformable porous media
http://hdl.handle.net/1956/9014
Cell-centered finite volume discretizations for deformable porous media
Nordbotten, Jan Martin
Journal article
The development of cell-centered finite volume discretizations for deformation is motivated by the desire for an approach compatible with the discretization of fluid flow in deformable porous media. We express the conservation of momentum in the finite volume sense, and introduce three approximation methods for the cell-face stresses. The discretization method is developed for general grids in one to three spatial dimensions, and leads to a global discrete system of equations for the displacement vector in each cell, after which the stresses are calculated based on a local expression. The method allows for anisotropic, heterogeneous and discontinuous coefficients.
The novel finite volume discretization is justified through numerical validation tests, designed to investigate classical challenges in discretization of mechanical equations. In particular our examples explore the stability with respect to the Poisson ratio and spatial discontinuities in the material parameters. For applications, logically Cartesian grids are prevailing, and we also explore the performance on perturbations on such grids, as well as on unstructured grids. For reference, comparison is made in all cases with the lowest-order Lagrangian finite elements, and the finite volume methods proposed herein is comparable for approximating displacement, and is superior for approximating stresses.
Mon, 11 Aug 2014 00:00:00 GMThttp://hdl.handle.net/1956/90142014-08-11T00:00:00ZExplicit volume-preserving splitting methods for divergence-free ODEs by tensor-product basis decompositions
http://hdl.handle.net/1956/9006
Explicit volume-preserving splitting methods for divergence-free ODEs by tensor-product basis decompositions
Munthe-Kaas, Antonella Zanna
Journal article
We discuss the construction of volume-preserving splitting methods based on a tensor product of single-variable basis functions. The vector field is decomposed as the sum of elementary divergence-free vector fields (EDFVFs), each of them corresponding to a basis function. The theory is a generalization of the monomial basis approach introduced in Xue & Zanna (2013, BIT Numer. Math., 53, 265–281) and has the trigonometric splitting of Quispel & McLaren (2003, J. Comp. Phys., 186, 308–316) and the splitting in shears of McLachlan & Quispel (2004, BIT, 44, 515–538) as special cases. We introduce the concept of diagonalizable EDFVFs and identify the solvable ones as those corresponding to the monomial basis and the exponential basis. In addition to giving a unifying view of some types of volume-preserving splitting methods already known in the literature, the present approach allows us to give a closed-form solution also to other types of vector fields that could not be treated before, namely those corresponding to the mixed tensor product of monomial and exponential (including trigonometric) basis functions.
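As a concrete illustration of the simplest special case mentioned above, splitting in shears: the divergence-free field f(x, y) = (sin y, −sin x) decomposes into two shear fields whose exact flows are translations, so any composition of the two exact flows preserves volume. A minimal sketch (illustrative only, not the paper's general EDFVF construction):

```python
import math

def strang_step(x, y, h):
    """One Strang splitting step for x' = sin(y), y' = -sin(x).

    Each sub-field is a shear (one component, depending only on the
    other variable), so its exact flow is a translation and is
    volume-preserving; hence the composed map is volume-preserving.
    """
    x = x + 0.5 * h * math.sin(y)   # exact flow of (sin y, 0)
    y = y - h * math.sin(x)         # exact flow of (0, -sin x)
    x = x + 0.5 * h * math.sin(y)
    return x, y

def jacobian_det(x, y, h, eps=1e-6):
    """Estimate the step's Jacobian determinant by central differences."""
    xp, yp = strang_step(x + eps, y, h); xm, ym = strang_step(x - eps, y, h)
    xq, yq = strang_step(x, y + eps, h); xr, yr = strang_step(x, y - eps, h)
    a, c = (xp - xm) / (2 * eps), (yp - ym) / (2 * eps)
    b, d = (xq - xr) / (2 * eps), (yq - yr) / (2 * eps)
    return a * d - b * c

det = jacobian_det(0.3, 1.1, h=0.5)   # equals 1 up to round-off
```

Since each shear has Jacobian determinant exactly 1, the determinant of the composed map is exactly 1 as well, which the finite-difference check confirms numerically.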
Sun, 23 Feb 2014 00:00:00 GMThttp://hdl.handle.net/1956/90062014-02-23T00:00:00ZSymmetric spaces and Lie triple systems in numerical analysis of differential equations
http://hdl.handle.net/1956/9002
Symmetric spaces and Lie triple systems in numerical analysis of differential equations
Munthe-Kaas, Hans; Quispel, Reinout; Zanna, Antonella
Journal article
A remarkable number of different numerical algorithms can be understood and analyzed using the concepts of symmetric spaces and Lie triple systems, which are well known in differential geometry from the study of spaces of constant curvature and their tangents. This theory can be used to unify a range of different topics, such as polar-type matrix decompositions, splitting methods for computation of the matrix exponential, composition of self-adjoint numerical integrators, and dynamical systems with symmetries and reversing symmetries. The thread of this paper is the following: involutive automorphisms on groups induce a factorization at the group level and a splitting at the algebra level. In this paper we give an introduction to the mathematical theory behind these constructions and review recent results. Furthermore, we present a new Yoshida-like technique for self-adjoint numerical schemes that allows one to increase the order of preservation of symmetries by two units. The proposed technique has the property that all the time-steps are positive.
Sat, 01 Mar 2014 00:00:00 GMThttp://hdl.handle.net/1956/90022014-03-01T00:00:00ZOn the Formulation of Mass, Momentum and Energy Conservation in the KdV Equation
http://hdl.handle.net/1956/8968
On the Formulation of Mass, Momentum and Energy Conservation in the KdV Equation
Ali, Alfatih Mohammed A.; Kalisch, Henrik
Journal article
The Korteweg-de Vries (KdV) equation is widely recognized as a simple model for unidirectional weakly nonlinear dispersive waves on the surface of a shallow body of fluid. While solutions of the KdV equation describe the shape of the free surface, information about the underlying fluid flow is encoded into the derivation of the equation, and the present article focuses on the formulation of mass, momentum and energy balance laws in the context of the KdV approximation. The densities and the associated fluxes appearing in these balance laws are given in terms of the principal unknown variable η representing the deflection of the free surface from rest position. The formulae are validated by comparison with previous work on the steady KdV equation. In particular, the mass flux, total head and momentum flux in the current context are compared to the quantities Q, R and S used in the work of Benjamin and Lighthill (Proc. R. Soc. Lond. A 224:448–460, 1954) on cnoidal waves and undular bores.
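For reference, the KdV equation in its standard dimensional form (undisturbed depth h₀, gravitational acceleration g), together with the generic conservation form taken by each of the balance laws discussed, can be written as follows; the specific densities and fluxes derived in the article are not reproduced here.

```latex
% KdV equation for the surface deflection \eta(x,t) over depth h_0:
\eta_t + \sqrt{g h_0}\,\eta_x
       + \frac{3}{2}\sqrt{\frac{g}{h_0}}\,\eta\,\eta_x
       + \frac{h_0^2 \sqrt{g h_0}}{6}\,\eta_{xxx} = 0.
% Each balance law takes the generic conservation form
%   \partial_t \rho(\eta) + \partial_x q(\eta) = 0,
% with density \rho and flux q expressed in terms of \eta.
```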
Wed, 01 Oct 2014 00:00:00 GMThttp://hdl.handle.net/1956/89682014-10-01T00:00:00ZElectrical conductivity of fractured media: A computational study of the self-consistent method
http://hdl.handle.net/1956/8923
Electrical conductivity of fractured media: A computational study of the self-consistent method
Sævik, Pål Næverlid; Berre, Inga; Jakobsen, Morten; Lien, Martha
Conference object
Effective medium theory can be used to link conductivity estimation methods with prior knowledge about the distribution of fractures in the investigated geological structure. In the literature, little work has been presented on assessing the accuracy of effective medium approximations for dense networks of finite-sized fractures. We present here a systematic computational study, comparing the conductivity predictions of the popular self-consistent method with results from numerical finite-element simulations. Our results show that the self-consistent method is accurate within acceptable error bounds for a range of parameter values, in some cases even beyond the percolation limit. We also compare the percolation thresholds predicted by self-consistent theory with the thresholds obtained by a numerical percolation algorithm. For the cases we have studied, the percolation thresholds agree to a remarkable degree.
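The self-consistent idea can be illustrated with the classical Bruggeman symmetric formula for a two-phase medium of spherical inclusions, a much simpler setting than the finite-fracture networks studied here: the effective conductivity is the root of a self-consistency condition, found below by bisection.

```python
def bruggeman_effective_conductivity(phi1, s1, s2, tol=1e-10):
    """Self-consistent (Bruggeman) effective conductivity in 3-D.

    Solves phi1*(s1 - se)/(s1 + 2*se) + phi2*(s2 - se)/(s2 + 2*se) = 0
    for se by bisection; phi1 and phi2 = 1 - phi1 are volume fractions.
    """
    phi2 = 1.0 - phi1

    def residual(se):
        return (phi1 * (s1 - se) / (s1 + 2 * se)
                + phi2 * (s2 - se) / (s2 + 2 * se))

    lo, hi = min(s1, s2), max(s1, s2)   # root lies between the phases
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if residual(lo) * residual(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

se = bruggeman_effective_conductivity(0.5, 1.0, 10.0)  # exactly 4 here
```

The effective value always lies between the phase conductivities, and a pure phase recovers its own conductivity, which makes the scheme easy to sanity-check.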
Sun, 01 Jan 2012 00:00:00 GMThttp://hdl.handle.net/1956/89232012-01-01T00:00:00ZFinite volume hydromechanical simulation in porous media
http://hdl.handle.net/1956/8911
Finite volume hydromechanical simulation in porous media
Nordbotten, Jan Martin
Journal article
Cell-centered finite volume methods are prevailing in numerical simulation of flow in porous media. However, due to the lack of cell-centered finite volume methods for mechanics, coupled flow and deformation is usually treated either by coupled finite-volume-finite-element discretizations, or within a finite element setting. The former approach is unfavorable as it introduces two separate grid structures, while the latter loses the advantages of finite volume methods for the flow equation. Recently, we proposed a cell-centered finite volume method for elasticity. Herein, we explore the applicability of this novel method to provide a compatible finite volume discretization for coupled hydromechanical flow in porous media. We detail in particular the issue of coupling terms, and show how these are naturally handled. Furthermore, we observe how the cell-centered finite volume framework naturally allows for modeling fractured and fracturing porous media through internal boundary conditions. We support the discussion with a set of numerical examples: the convergence properties of the coupled scheme are first investigated; second, we illustrate the practical applicability of the method for both fractured and heterogeneous media.
Tue, 27 May 2014 00:00:00 GMThttp://hdl.handle.net/1956/89112014-05-27T00:00:00ZControllability on Infinite-Dimensional Manifolds: A Chow-Rashevsky Theorem
http://hdl.handle.net/1956/8886
Controllability on Infinite-Dimensional Manifolds: A Chow-Rashevsky Theorem
Khajeh Salehani, Mahdi; Markina, Irina
Journal article
One of the fundamental problems in control theory is that of controllability, the question of whether one can drive a system from one point to another within a given class of controls. A classical result in the geometric control theory of finite-dimensional (nonlinear) systems is the Chow–Rashevsky theorem, which gives a sufficient condition for controllability on any connected manifold of finite dimension. In other words, the classical Chow–Rashevsky theorem, which is in fact a primary theorem in subriemannian geometry, gives a global connectivity property of a subriemannian manifold. In this paper, following the unified approach of Kriegl and Michor (The Convenient Setting of Global Analysis, Mathematical Surveys and Monographs, vol. 53, Am. Math. Soc., Providence, 1997) for a treatment of global analysis on a class of locally convex spaces known as convenient, we give a generalization of the Chow–Rashevsky theorem for control systems on regular connected manifolds modelled on convenient (infinite-dimensional) locally convex spaces which are not necessarily normable. To indicate an application of our approach to infinite-dimensional geometric control problems, we conclude the paper with a novel controllability result on the group of orientation-preserving diffeomorphisms of the unit circle.
Tue, 29 Apr 2014 00:00:00 GMThttp://hdl.handle.net/1956/88862014-04-29T00:00:00ZMethods and Tools for Analysis of Symmetric Cryptographic Primitives
http://hdl.handle.net/1956/8828
Methods and Tools for Analysis of Symmetric Cryptographic Primitives
Kazymyrov, Oleksandr
Doctoral thesis
<p>The development of modern cryptography is associated with the emergence of computing machines. Since specialized equipment for protection of sensitive information was initially implemented only in hardware, stream ciphers were widespread. Later, other areas of symmetric and asymmetric cryptography were established with the invention of general-purpose processors. In particular, such symmetric cryptographic primitives as block ciphers, message authentication codes (MACs), authenticated ciphers and others began to develop rapidly. Today various cryptographic algorithms are commonly used in everyday life to protect private data.</p>
<p>Design and analysis of advanced symmetric cryptographic primitives require a lot of time and resources. This is related to many factors, mainly to the cryptanalysis of prospective encryption algorithms under development. Every year new and modified attacks are published, leading to a rapid increase in the quantity of requirements and criteria imposed on cryptoprimitives.</p>
<p>Most of this thesis is devoted to the analysis and improvement of cryptographic attacks and the corresponding criteria for basic components. Almost all modern cryptoprimitives use nonlinear mappings for protection against advanced attacks. In connection with this, a new method was proposed for the generation of random substitutions (S-boxes) with extreme cryptographic indicators that can be used in next-generation ciphers to provide high and ultra-high security levels. In addition, several criteria imposed on S-boxes used in block ciphers were analyzed and their significance for block ciphers was proven. Also worth mentioning are a practical method for testing two vectorial Boolean functions for equivalence and a universal tool for checking properties of arbitrary binary nonlinear components, presented in the papers gathered in this thesis.</p>
<p>Another part of the thesis is dedicated to the cryptanalysis of hash functions as well as block and stream ciphers. To be more precise, an algebraic attack based on binary decision diagrams (BDDs) was performed on the reduced Data Encryption Standard (DES), on a scaled-down version of the Advanced Encryption Standard (AES), and on the extended affine (EA) equivalence problem. Moreover, an algebraic approach was used to reconstruct an initial representation of the current Russian hash standard GOST 34.11-2012. Finally, a backward states tree method has been used to analyze stream ciphers based on the combination principle of linear and nonlinear feedback registers.</p>
Mon, 01 Dec 2014 00:00:00 GMThttp://hdl.handle.net/1956/88282014-12-01T00:00:00ZClassical and Stochastic Slit Löwner Evolution
http://hdl.handle.net/1956/8746
Classical and Stochastic Slit Löwner Evolution
Ivanov, Georgy
Doctoral thesis
Fri, 10 Oct 2014 00:00:00 GMThttp://hdl.handle.net/1956/87462014-10-10T00:00:00ZImproving the error rates of the Begg and Mazumdar test for publication bias in fixed effects meta-analysis
http://hdl.handle.net/1956/8611
Improving the error rates of the Begg and Mazumdar test for publication bias in fixed effects meta-analysis
Gjerdevik, Miriam; Heuch, Ivar
Journal article
<p>Background: The rank correlation test introduced by Begg and Mazumdar is extensively used in meta-analysis to test for publication bias in clinical and epidemiological studies. It is based on correlating the standardized treatment effect with the variance of the treatment effect using Kendall’s tau as the measure of association. To our knowledge, the operational characteristics regarding the significance level of the test have not, however, been fully assessed.</p>
<p>Methods: We propose an alternative rank correlation test to improve the error rates of the original Begg and Mazumdar test. This test is based on the simulated distribution of the estimated measure of association, conditional on sampling variances. Furthermore, Spearman’s rho is suggested as an alternative rank correlation coefficient. The attained level and power of the tests are studied by simulations of meta-analyses assuming the fixed effects model.</p>
<p>Results: The significance levels of the original Begg and Mazumdar test often deviate considerably from the nominal level, the null hypothesis being rejected too infrequently. It is proven mathematically that the assumptions for using the rank correlation test are not strictly satisfied. The pairs of variables fail to be independent, and there is a correlation between the standardized effect sizes and sampling variances under the null hypothesis of no publication bias. In the meta-analysis setting, the adverse consequences of a false negative test are more profound than the disadvantages of a false positive test. Our alternative test improves the error rates in fixed effects meta-analysis. Its significance level equals the nominal value, and the Type II error rate is reduced. In small data sets Spearman’s rho should be preferred to Kendall’s tau as the measure of association.</p>
<p>Conclusions: As the attained significance levels of the test introduced by Begg and Mazumdar often deviate greatly from the nominal level, modified rank correlation tests, improving the error rates, should be preferred when testing for publication bias assuming fixed effects meta-analysis.</p>
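The core computation behind such rank correlation tests is Kendall's tau between the standardized effect sizes and their sampling variances. A minimal illustrative implementation of the statistic itself (O(n²) over pairs; this is not the corrected conditional test proposed in the article, and the numbers below are hypothetical):

```python
def kendall_tau(x, y):
    """Kendall's tau-a: (concordant - discordant) / total pairs."""
    n = len(x)
    conc = disc = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (x[i] - x[j]) * (y[i] - y[j])
            if s > 0:
                conc += 1
            elif s < 0:
                disc += 1
    return (conc - disc) / (n * (n - 1) / 2)

# Begg-Mazumdar-style statistic: correlate standardized effects
# with their sampling variances (hypothetical illustrative numbers).
effects = [0.8, 1.1, 0.4, 1.6, 0.9]
variances = [0.10, 0.25, 0.05, 0.40, 0.15]
tau = kendall_tau(effects, variances)
```

A tau near ±1 indicates a strong monotone association between effect size and variance, which is the signature of publication bias the test looks for.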
Mon, 22 Sep 2014 00:00:00 GMThttp://hdl.handle.net/1956/86112014-09-22T00:00:00ZA study in Univalence
http://hdl.handle.net/1956/8566
A study in Univalence
Husebø, Anders Knarvik
Master thesis
In this master thesis we study the newly discovered homotopy type theory and its
models within mathematics. We look at models in simplicial sets, simplicial symmetric monoids, and a new category which could be called multi-pointed simplicial sets. We also describe dependent type theory from the computer science point of view, and some of its implications.
Mon, 02 Jun 2014 00:00:00 GMThttp://hdl.handle.net/1956/85662014-06-02T00:00:00ZPortfolio Optimization with PCC-GARCH-CVaR model
http://hdl.handle.net/1956/8555
Portfolio Optimization with PCC-GARCH-CVaR model
Xi, Linda Mon
Master thesis
This thesis investigates the Conditional Value-at-Risk (CVaR) portfolio optimization approach combined with a univariate GARCH model and pair-copula constructions (PCC) to determine the optimal asset allocation for a portfolio.
The methodology focuses on minimizing CVaR as the risk measure, replacing the variance used in the traditional Markowitz optimization framework. The GARCH model provides a tool for predicting and analyzing the time-varying volatility to which financial assets are exposed, while copulas allow us to model the non-linear dependence structure and the margins separately.
We compare the performance of the CVaR-optimized portfolio with other investment strategies such as Constant-Mix and Buy-and-Hold. Although the selection of strategy depends on the investor's risk profile, it is empirically shown that the proposed CVaR-optimized portfolio outperforms the other two investment strategies in terms of accumulated wealth in the long run.
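Conditional Value-at-Risk at level β is the expected loss over the worst (1 − β) fraction of scenarios. A minimal empirical estimator on scenario losses (illustrative only; the thesis's optimization additionally couples this risk measure with GARCH and pair-copula scenario simulation):

```python
import math

def empirical_cvar(losses, beta=0.95):
    """Empirical CVaR: mean of the worst ceil((1-beta)*n) losses."""
    n = len(losses)
    k = max(1, math.ceil((1.0 - beta) * n))   # number of tail scenarios
    tail = sorted(losses, reverse=True)[:k]
    return sum(tail) / k

# 100 hypothetical scenario losses 1..100: CVaR_0.90 averages the worst 10.
losses = [float(i) for i in range(1, 101)]
cvar = empirical_cvar(losses, beta=0.9)      # (91 + ... + 100) / 10 = 95.5
```

In the optimization setting this tail average, rather than the variance, is what gets minimized over the portfolio weights.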
Mon, 02 Jun 2014 00:00:00 GMThttp://hdl.handle.net/1956/85552014-06-02T00:00:00ZInvestigations of the Modified Navier-Stokes Equations in One Dimension
http://hdl.handle.net/1956/8554
Investigations of the Modified Navier-Stokes Equations in One Dimension
Sårheim, Inga Sofie
Master thesis
In this thesis we investigate the Brenner-Navier-Stokes equations in one spatial dimension. These are a modified version of the Navier-Stokes equations, where the modification consists of a mass diffusion term; this modification is significant for flows with high density gradients. We examine a shock wave problem in argon. Both the original and the modified Navier-Stokes equations are used to solve the conservation laws. We study the effect of both entropy-stable and entropy-conservative schemes, in addition to several different ways of modelling the diffusion parameters.
The solutions are analyzed and compared with experimental results.
Mon, 02 Jun 2014 00:00:00 GMThttp://hdl.handle.net/1956/85542014-06-02T00:00:00ZContinuous Max-Flow for Image Segmentation with Shape Priors
http://hdl.handle.net/1956/8367
Continuous Max-Flow for Image Segmentation with Shape Priors
Kvile, Mari Aurlien
Master thesis
In this thesis we propose a stable method for image segmentation with shape priors. The original Chan-Vese intensity based segmentation model with regularisation term is extended to include shape prior information. We study shape priors which are pose invariant under the group of similarity transformations, that is under rotation, scaling and translation. In order to solve this problem robustly and effectively, an algorithm based on the theory of max-flow and min-cut is used in addition to a gradient descent procedure for updating the pose parameters. Comprehensive experiments are provided to demonstrate the behaviour of the proposed method on different images.
Mon, 02 Jun 2014 00:00:00 GMThttp://hdl.handle.net/1956/83672014-06-02T00:00:00ZFast Image Segmentation Using Variational Optimization Methods With Edge Detector
http://hdl.handle.net/1956/8366
Fast Image Segmentation Using Variational Optimization Methods With Edge Detector
Stanislaus, Mary Gerina
Master thesis
In this work, we apply techniques in variational optimization to image segmentation. We study three different segmentation models: one is based on the active contour method, the second is based on a piecewise constant level set method, and the last uses a continuous max-flow min-cut model. We obtain significantly better segmentation results in the first and the third model by including an experimental edge detector. The first model is a special case of the minimal partition problem, the second model uses discontinuities of piecewise constant level set functions to represent interfaces between the region of interest and the background, and the third model uses a spatially continuous max-flow min-cut framework which is a very efficient method to segment images. The first two models are non-convex and may contain many local solutions, but the last model is a convex optimization problem and therefore finds the global solution.
Mon, 02 Jun 2014 00:00:00 GMThttp://hdl.handle.net/1956/83662014-06-02T00:00:00ZExact and Superconvergent Solutions of the Multi-Point Flux Approximation O-method: Analysis and Numerical Tests
http://hdl.handle.net/1956/8353
Exact and Superconvergent Solutions of the Multi-Point Flux Approximation O-method: Analysis and Numerical Tests
Olderkjær, Daniel Stensrud
Master thesis
In this thesis we prove that the multi-point flux approximation O-method (MPFA) yields exact potential and flux for the trigonometric potential functions u(x,y)=sin(x)sin(y) and u(x,y)=cos(x)cos(y). This is done on uniform square grids in a homogeneous medium, with the principal directions of the permeability aligned with the grid directions, under periodic boundary conditions. Earlier theoretical and numerical convergence articles suggest that these potential functions should only yield second-order convergence. Hence, our motivation for the analysis was to gain new insight into the convergence of the method, as well as to develop theoretical proofs for what seem to be suitable examples for testing implementations. An extension of the result to uniform rectangular grids in an isotropic medium is also briefly discussed, before we develop a numerical overview of the exactness phenomenon for different types of boundary conditions. Lastly, we investigate the application of these results to obtain exact potential and flux with the MPFA method for general potential functions approximated by Fourier series.
Mon, 02 Jun 2014 00:00:00 GMThttp://hdl.handle.net/1956/83532014-06-02T00:00:00ZInvestigations of the Kaup-Boussinesq model equations for water waves
http://hdl.handle.net/1956/8335
Investigations of the Kaup-Boussinesq model equations for water waves
Juliussen, Bjørn-Sverre Hjelle
Master thesis
The Kaup-Boussinesq system is a coupled system of nonlinear partial differential equations which has been derived as a model for surface waves in the context of the Boussinesq scaling, and it has also been derived for an internal wave system. In this thesis, modeling properties of the Kaup-Boussinesq water-wave model are under investigation. Differential balance laws for mass, momentum and energy are considered, and we present an exact differential balance for momentum. A Kaup-Boussinesq system describing long internal waves is investigated and compared with the Gardner equation. Finally, a spectral method for the numerical discretization of the Kaup-Boussinesq system for surface waves is put forward, and shown to converge and be stable.
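A spectral discretization of the kind put forward rests on computing spatial derivatives in Fourier space; a minimal sketch of that building block (illustrative, not the thesis's full Kaup-Boussinesq solver):

```python
import numpy as np

def spectral_derivative(u, L=2 * np.pi):
    """Derivative of a periodic signal sampled on [0, L) via the FFT.

    Multiplying the Fourier coefficients by i*k differentiates
    exactly for band-limited data, which is what makes spectral
    methods attractive for dispersive equations like Kaup-Boussinesq.
    """
    n = len(u)
    k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)   # wavenumbers
    return np.real(np.fft.ifft(1j * k * np.fft.fft(u)))

x = 2 * np.pi * np.arange(64) / 64
err = np.max(np.abs(spectral_derivative(np.sin(x)) - np.cos(x)))
# err is at machine-precision level for this band-limited input
```

For smooth periodic solutions this gives spectral (faster than any polynomial order) accuracy in space, which is one reason such methods converge well.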
Mon, 02 Jun 2014 00:00:00 GMThttp://hdl.handle.net/1956/83352014-06-02T00:00:00ZRadielle basisfunksjoner som adaptiv kollokasjonsmetode
http://hdl.handle.net/1956/8333
Radielle basisfunksjoner som adaptiv kollokasjonsmetode
Nicolajsen, Tomas
Master thesis
The purpose of this thesis is to investigate an adaptive collocation method based on radial basis functions. The aim is to determine whether simple criteria for the distribution of centres and shape parameters can produce a distribution that reflects properties of the problem, and whether this yields a more efficient and stable collocation method.
Sun, 01 Jun 2014 00:00:00 GMThttp://hdl.handle.net/1956/83332014-06-01T00:00:00ZBegrepsforståelse i matematikkfaget
http://hdl.handle.net/1956/8332
Begrepsforståelse i matematikkfaget
Vinnes, Eivind Ole
Master thesis
The intention of this thesis is to assess the basic mathematical knowledge possessed by students entering upper secondary school, and to offer some suggestions for how the teaching could be improved.
Fri, 30 May 2014 00:00:00 GMThttp://hdl.handle.net/1956/83322014-05-30T00:00:00ZRekneark på ungdomstrinnet
http://hdl.handle.net/1956/8331
Rekneark på ungdomstrinnet
Johansson, Kari
Master thesis
The research question of this thesis was how pupils experience the use of spreadsheets in mathematics at the lower secondary level. Over the course of one year, various spreadsheet exercises were tried out in the 9th/10th grade. Afterwards I collected data by means of questionnaires, recordings of pairs of pupils solving selected exercises, and a group interview. I focused on which advantages and problems the pupils experienced, and on which types of exercises are relevant for them.
Fri, 06 Jun 2014 00:00:00 GMThttp://hdl.handle.net/1956/83312014-06-06T00:00:00ZHybrids between common and Antarctic minke whales are fertile and can back-cross
http://hdl.handle.net/1956/8118
Hybrids between common and Antarctic minke whales are fertile and can back-cross
Glover, Kevin; Kanda, Naohisa; Haug, Tore; Pastene, Luis A.; Øien, Nils Inge; Seliussen, Bjørghild Breistein; Sørvik, Anne Grete Eide; Skaug, Hans J.
Journal article
<p>Background: Minke whales are separated into two genetically distinct species: the Antarctic minke whale found in the southern hemisphere, and the common minke whale which is cosmopolitan. The common minke whale is further divided into three allopatric sub-species found in the North Pacific, southern hemisphere, and the North Atlantic. Here, we aimed to identify the genetic ancestry of a pregnant female minke whale captured in the North Atlantic in 2010, and her fetus, using data from the mtDNA control region, 11 microsatellite loci and a sex determining marker.</p><p>Results: All statistical parameters demonstrated that the mother was a hybrid displaying maternal and paternal contribution from North Atlantic common and Antarctic minke whales respectively. Her female fetus displayed greater genetic similarity to North Atlantic common minke whales than herself, strongly suggesting that the hybrid mother had paired with a North Atlantic common minke whale.</p><p>Conclusion: This study clearly demonstrates, for the first time, that hybrids between minke whale species may be fertile, and that they can back-cross. Whether contact between these species represents a contemporary event linked with documented recent changes in the Antarctic ecosystem, or has occurred at a low frequency over many years, remains open.</p>
Mon, 15 Apr 2013 00:00:00 GMThttp://hdl.handle.net/1956/81182013-04-15T00:00:00ZReversible Jump Markov Chain Monte Carlo: Some Theory and Applications
http://hdl.handle.net/1956/8023
Reversible Jump Markov Chain Monte Carlo: Some Theory and Applications
Lyyjynen, Hannu Sakari
Master thesis
The history of MCMC, theories of Bayesian thinking and model choice, the Accept-Reject algorithm, Markov chains, the Metropolis-Hastings algorithm and the reversible jump MCMC are explained. Then the reversible jump MCMC is applied as change-point analysis to the coal mine disaster example, familiar from [Green, 1995], and to examples of counting terrorism attacks (worldwide, in Iraq and in Afghanistan). The novel part is estimating the change points of the hazard rate of terrorism attacks in Afghanistan during the last 35 years.
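A fixed-dimension simplification of the change-point problem (a single change point, Poisson counts, conjugate Gamma priors) admits a closed-form posterior over the change-point location that can be enumerated directly; reversible jump MCMC becomes necessary once the number of change points is itself unknown. A hedged sketch, with all priors and data hypothetical:

```python
import math

def log_marginal(counts, a=1.0, b=1.0):
    """Log marginal likelihood of Poisson counts with a Gamma(a, b)
    prior on the rate. The lgamma(c+1) data terms cancel when
    comparing splits of the same data, so they are omitted here."""
    n, s = len(counts), sum(counts)
    return (a * math.log(b) - math.lgamma(a)
            + math.lgamma(a + s) - (a + s) * math.log(b + n))

def map_change_point(counts):
    """Most probable single change point under a uniform prior on k."""
    scores = {k: log_marginal(counts[:k]) + log_marginal(counts[k:])
              for k in range(1, len(counts))}
    return max(scores, key=scores.get)

# Hypothetical yearly event counts with an abrupt rate drop at k = 20.
data = [10] * 20 + [2] * 20
k_hat = map_change_point(data)   # recovers k_hat == 20
```

In the reversible jump setting, moves that add or delete a change point compare exactly such marginal likelihoods, weighted by the proposal and prior ratios.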
Mon, 10 Mar 2014 00:00:00 GMThttp://hdl.handle.net/1956/80232014-03-10T00:00:00ZGeological Storage of CO2: Sensitivity and Risk Analysis
http://hdl.handle.net/1956/7913
Geological Storage of CO2: Sensitivity and Risk Analysis
Ashraf, Meisam
Doctoral thesis
<p>Geological CO2 storage has the potential to be a key technology for prevention of industrial CO2 emission into the atmosphere. A successful storage operation requires safe geological structures with large storage capacity. The practicality of the technology is challenged by various operational concerns, ranging from site selection to long-term monitoring of the injected CO2. The research in this report addresses the value of using sophisticated geological modeling to help in predicting storage performance.</p><p>In the first part, we investigate the significance of assessing the geological uncertainty and its consequences in site selection and the early stages of storage operations. This includes the injection period and the early migration time of the injected CO2 plume. The extensive set of realistic geological realizations used in the analysis forms the key part of this research. Heterogeneity is modelled using the most influential geological parameters in a shallow-marine system, including aggradation angle, levels of barriers in the system, faults, lobosity, and progradation direction.</p><p>A typical injection scenario is simulated over 160 realizations, and major flow responses are defined to measure the success of the early stages of CO2 storage operations. These responses include the volume of CO2 trapped by capillarity, the dynamics of the plume in the medium, pressure responses, and the risk of leakage through a failure in the sealing cap-rock. The impact of geological uncertainty on these responses is investigated by comparing all cases for their performance. The results show large variations in the responses due to changing geological parameters. Among the most influential parameters are aggradation angle, progradation direction, and faults in the medium.</p><p>A sophisticated geological uncertainty study requires a large number of detailed simulations that are time-consuming and computationally costly. The second part of the research introduces a workflow that employs an approximating response surface method called arbitrary polynomial chaos (aPC). The aPC is fast and sophisticated enough to be used practically in the process of sensitivity analysis and uncertainty and risk assessment. We demonstrate the workflow by combining the aPC with a global sensitivity analysis technique, the Sobol indices, a variance-based method proven to be practical for complicated physical problems. Probabilistic uncertainty analysis is performed by applying the Monte Carlo process using the aPC. The results show that the aPC can be used successfully in an extensive geological uncertainty study.</p>
Thu, 10 Apr 2014 00:00:00 GMThttp://hdl.handle.net/1956/79132014-04-10T00:00:00ZStatistiske metoder for alder-periode-kohort-analyser. En sammenligning av nyere metoder med konvensjonelle generaliserte lineære modeller
http://hdl.handle.net/1956/7795
Statistiske metoder for alder-periode-kohort-analyser. En sammenligning av nyere metoder med konvensjonelle generaliserte lineære modeller
Askeland, Olaug Margrete
Master thesis
This master's thesis deals with statistical methods for age-period-cohort (APC) analyses. For a given response, APC analyses attempt to separate the influence of age from the influence associated with time period and the influence associated with birth cohort. The well-known relationship between the three factors, period - age = cohort, makes parameter estimation difficult, and a general dilemma in APC analyses is the problem of separating the simultaneous effects. Many solutions to this identification problem have been proposed, and in this thesis newer methods are compared with conventional generalized linear models. The comparisons are made through simulation studies. Particular attention is given to a newer method, the Intrinsic Estimator (IE), which is compared with a widely used method, the Constrained Generalized Linear Models estimator (CGLIM). The CGLIM method requires that a constraint be imposed on the coefficients, and the problem with this method is that it depends on prior information about the data to set this constraint; the coefficient estimates are sensitive to the choice of constraint. The IE method attempts to achieve model identification with minimal assumptions. Comparisons are also made with methods based on first-order and second-order differences and with methods that include drift. For some data sets one need not use the full APC model; it suffices to use methods that include only one or two of the factors. Various goodness-of-fit measures are then useful for judging whether a model fits a data set well enough. The IE method is also compared with such methods. A newer method that uses Partial Least Squares to estimate the simultaneous effects of age, period and cohort is introduced as well.
The IE method has proved to be a useful approach for identification and estimation in the APC model, producing unbiased and efficient estimates. It is a safer choice when nothing is known about the data to be analysed, since in such cases an arbitrary choice of constraint in the other methods may, at worst, yield estimates far from the truth.
Wed, 20 Nov 2013 00:00:00 GMT
http://hdl.handle.net/1956/7795
http://hdl.handle.net/1956/7704
Optimization of CO2 Geological Storage Cost
Zhu, Sha
Master thesis
Carbon dioxide capture and storage (CCS) is a promising strategy for combating climate change by injecting carbon dioxide at large scale back into underground formations and storing it, possibly permanently. It is an existing technology, but as a combined climate and economic undertaking it is still a relatively new concept. We are interested in studying the cost of CCS, in particular the cost of CO2 geological storage, and in optimizing that cost, since the high cost of CCS is a major hurdle to industrial deployment of the technology.
Sun, 01 Dec 2013 00:00:00 GMT
http://hdl.handle.net/1956/7704
http://hdl.handle.net/1956/7689
Modeling and simulation of concrete carbonation in 1-d using two-point flux approximation
Røe, Tineke
Master thesis
In this master's thesis we model concrete carbonation using mass conservation equations and Darcy's law, which yields a set of coupled partial differential equations, together with an ordinary differential equation modeling porosity change. These equations are discretized using a two-point flux approximation in one spatial dimension and implicit Euler in time. The nonlinear pressure equation is linearized using Newton's method. We run simulations, and the results are compared to a constructed analytical solution and to examples in the literature.
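As a rough illustration of the kind of scheme described above (a sketch under simplifying assumptions, not the thesis code): a linear 1-D diffusion equation discretized with a two-point flux approximation in space and implicit Euler in time. The Newton linearization is omitted here since the problem is linear, so each time step reduces to one tridiagonal solve.

```python
# 1-D diffusion u_t = (K u_x)_x with TPFA in space, implicit Euler in
# time, and a simple ghost-value treatment of the Dirichlet boundaries
# (boundary value one grid spacing outside each end cell).

def thomas(a, b, c, d):
    """Solve a tridiagonal system; a: sub-, b: main, c: super-diagonal."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def step(u, K, dx, dt, uL, uR):
    """One implicit Euler step with two-point fluxes between neighbors."""
    n = len(u)
    lam = K * dt / dx ** 2
    a = [-lam] * n
    b = [1 + 2 * lam] * n
    c = [-lam] * n
    d = u[:]
    d[0] += lam * uL          # Dirichlet boundary contributions
    d[-1] += lam * uR
    return thomas(a, b, c, d)

n, K, dx, dt = 10, 1.0, 0.1, 0.05
u = [0.0] * n
for _ in range(500):          # march to steady state
    u = step(u, K, dx, dt, uL=1.0, uR=0.0)
# The steady state is the linear profile between the boundary values.
```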
Thu, 21 Nov 2013 00:00:00 GMT
http://hdl.handle.net/1956/7689
http://hdl.handle.net/1956/7686
Negativ binomisk regresjon med modifiserte sannsynligheter for nullobservasjoner; ZINB og ZANB
Optun, Mette
Master thesis
Regression models based on the Poisson and negative binomial distributions are often used for regression analysis of data sets in medicine, biology, economics, and many other fields. Cases often arise where the proportion of zeros in the data set does not match the proportion expected under the assumption that the observations are negative binomially (NB) or Poisson distributed. One example is the number of insurance claims reported in a year by a group of policyholders, which, because of deductibles, will contain more zero observations than expected. The number of days a patient spends in hospital, where only admitted patients are included, is another example where the expected number of zero observations will not equal that expected from NB- or Poisson-distributed data.
This thesis considers two models designed to fit data sets that are assumed to be negative binomially distributed but that also contain many zero observations.
Before introducing these models, some theory on negative binomial distributions and the regression models based on them is included. This provides a solid foundation for understanding the structure of the underlying distributions and of the ZINB and ZANB models, which are then explained in detail. Finally, we examine whether the choice between the two models is decisive for the results when fitting various data sets, and what the consequences may be of using either ZINB or ZANB as the regression model when the other is the correct one. In addition, we assess whether it is generally possible to tell which model is the more appropriate from the properties of the observed data set.
Wed, 06 Nov 2013 00:00:00 GMT
http://hdl.handle.net/1956/7686
http://hdl.handle.net/1956/7563
Volume preserving numerical integrators for ordinary differential equations
Xue, Huiyan
Doctoral thesis
Fri, 08 Nov 2013 00:00:00 GMT
http://hdl.handle.net/1956/7563
http://hdl.handle.net/1956/7562
Generating function and volume preserving mappings
Xue, Huiyan; Zanna, Antonella
Peer reviewed
In this paper, we study generating forms and generating functions for volume preserving mappings in R^n. We derive some parametric classes of volume preserving numerical schemes for divergence-free vector fields. In passing, by extension of the Poincaré generating function and a change of variables, we obtain a symplectic equivalent of the theta-method for differential equations, which includes the implicit midpoint rule and the symplectic Euler A and B methods as special cases.
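The special cases mentioned can be illustrated with a minimal sketch (a generic illustration of symplectic Euler for the harmonic oscillator, not code from the paper): the one-step map is linear, so its area preservation can be checked directly via the Jacobian determinant.

```python
# Symplectic Euler "A" for the harmonic oscillator H(q, p) = (p^2 + q^2) / 2.

def symplectic_euler_A(q, p, h):
    # Update p first, then q with the new p.
    p_new = p - h * q
    q_new = q + h * p_new
    return q_new, p_new

h = 0.1
# The map (q, p) -> (q + h(p - h q), p - h q) is linear with Jacobian
# [[1 - h^2, h], [-h, 1]]; its determinant is 1, so area is preserved.
det = (1 - h ** 2) * 1 - h * (-h)

# Long-time behavior: the energy stays close to its initial value 0.5
# instead of drifting, a hallmark of symplectic integrators.
q, p = 1.0, 0.0
for _ in range(1000):
    q, p = symplectic_euler_A(q, p, h)
energy = 0.5 * (q * q + p * p)
```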
Sat, 01 Mar 2014 00:00:00 GMT
http://hdl.handle.net/1956/7562
http://hdl.handle.net/1956/7561
High order volume preserving integrators for three kinds of divergence-free vector fields via commutators
Xue, Huiyan
Journal article
In this paper, we focus on the construction of high order volume preserving integrators for divergence-free vector fields of the monomial basis, the exponential basis, and the tensor product of the monomial and exponential bases. We first prove that the commutators of elementary divergence-free vector fields (EDFVF) of these three kinds are still divergence-free vector fields of the same kind. For EDFVFs of these three kinds, we construct high order volume preserving integrators using the multi-commutators. Moreover, we consider orderings of the EDFVFs and their commutators to reduce the error of the schemes, showing by numerical tests that the strategies in [8] work well.
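A hedged sketch of the underlying idea (a generic splitting into elementary divergence-free fields, not the paper's high order construction): each substep is the exact flow of one elementary divergence-free field, each such flow is a shear with unit Jacobian determinant, and hence the composed map preserves volume.

```python
# Splitting for the divergence-free field (y^2, x^2): each component
# field (y^2, 0) and (0, x^2) is solved exactly; both flows are shears.

def shear_splitting_step(x, y, h):
    x = x + h * y ** 2    # exact flow of (y^2, 0); Jacobian det = 1
    y = y + h * x ** 2    # exact flow of (0, x^2); Jacobian det = 1
    return x, y

def jacobian_det(fmap, x, y, d=1e-6):
    # Central finite differences for the 2x2 Jacobian of (x, y) -> fmap(x, y).
    fx1, fy1 = fmap(x + d, y); fx0, fy0 = fmap(x - d, y)
    gx1, gy1 = fmap(x, y + d); gx0, gy0 = fmap(x, y - d)
    j11, j21 = (fx1 - fx0) / (2 * d), (fy1 - fy0) / (2 * d)
    j12, j22 = (gx1 - gx0) / (2 * d), (gy1 - gy0) / (2 * d)
    return j11 * j22 - j12 * j21

# The composition of the two shears still has determinant exactly 1.
det = jacobian_det(lambda x, y: shear_splitting_step(x, y, 0.1), 0.7, -0.3)
```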
Tue, 01 Jan 2013 00:00:00 GMT
http://hdl.handle.net/1956/7561
http://hdl.handle.net/1956/7560
Detecting Periodic Elements in Higher Topological Hochschild Homology
Veen, Torleif
Doctoral thesis
Tue, 20 Aug 2013 00:00:00 GMT
http://hdl.handle.net/1956/7560
http://hdl.handle.net/1956/7543
Image processing, filtering and segmentation of data-sets for reservoir simulation
Gurholt, Tiril Pedersen
Doctoral thesis
Paper A: 2-D Visualisation of Unstable Waterflood and Polymer Flood for Displacement of Heavy Oil. Full-text not available in BORA. The published version is available at: <a href="http://dx.doi.org/10.2118/154292-ms" target="blank">http://dx.doi.org/10.2118/154292-ms</a>
Paper B: Determination of Connectivity in Vuggy Carbonate Rock Using Image Segmentation Techniques. Full-text not available in BORA.
Paper C: Pore space characterization of vuggy carbonate rocks: A comparative study of the performance of various image segmentation techniques. Full-text not available in BORA.
Paper D: 3D Multiphase Piecewise Constant Level Set Method Based on Graph Cut Minimization. Full-text not available in BORA.
Fri, 11 Oct 2013 00:00:00 GMT
http://hdl.handle.net/1956/7543
http://hdl.handle.net/1956/7509
Modelling Fluid Flow and Heat Transport in Fractured Porous Media
Lampe, Victor
Master thesis
Flow in porous media is an important and well-researched topic in both academia and industry, and more accurate mathematical models are constantly sought. Fractures in a porous medium can be an important contributor to the overall dynamics of a porous system, and several conceptual models for fractured porous media have been proposed over the years. We investigate a selection of these models, starting with the basics of non-fractured porous media. We discuss the effects of fractures on both single-phase fluid flow and heat transport, and review a few different approaches to the modelling of fractured porous media. Three different models, an equivalent continuum model, a dual continuum model, and a discrete fracture model, are compared through analysis and numerical experiments. Our simulations show that the equivalent continuum model is unreliable and falls short of the other two, which yield comparable results.
Tue, 27 Aug 2013 00:00:00 GMT
http://hdl.handle.net/1956/7509
http://hdl.handle.net/1956/7429
Auxiliary variables for 3D multiscale simulations in heterogeneous porous media
Sandvin, Andreas; Keilegavlen, Eirik; Nordbotten, Jan Martin
Journal article
<p>The multiscale control-volume methods for solving problems involving flow in porous media have gained much interest during the last decade. Recasting these methods in an algebraic framework allows one to consider them as preconditioners for iterative solvers. Despite intense research on the 2D formulation, few results have been shown for 3D, where indeed the performance of multiscale methods deteriorates. The interpretation of multiscale methods as vertex based domain decomposition methods, which are non-scalable for 3D domain decomposition problems, allows us to understand this loss of performance.</p><p>We propose a generalized framework based on auxiliary variables on the coarse scale. These are enrichments of the coarse scale, which can be selected to improve the interpolation onto the fine scale. Where the existing coarse scale basis functions are designed to capture local sub-scale heterogeneities, the auxiliary variables are aimed at better capturing non-local effects resulting from non-linear behavior of the pressure field. The auxiliary coarse nodes fit into the framework of mass-conservative domain-decomposition (MCDD) preconditioners, allowing us to construct, as special cases, both the traditional (vertex based) multiscale methods and their wire basket generalization.</p>
Mon, 01 Apr 2013 00:00:00 GMT
http://hdl.handle.net/1956/7429
http://hdl.handle.net/1956/7417
A unified multilevel framework of upscaling and domain decomposition
Sandvin, Andreas; Nordbotten, Jan Martin; Aavatsmark, Ivar
Conference object
We consider multiscale preconditioners for a class of mass-conservative domain-decomposition (MCDD) methods. For the application of reservoir simulation, we need to solve large linear systems arising from finite-volume discretisations of elliptic PDEs with highly variable coefficients. We introduce an algebraic framework, based on probing, for constructing mass-conservative operators on multiple coarse scales. These operators may further be applied as coarse spaces for additive Schwarz preconditioners. By applying different local approximations to the Schur complement system, based on a careful choice of probing vectors, we show how the MCDD preconditioners can serve both as efficient preconditioners for iterative methods and as accurate upscaling techniques for the heterogeneous elliptic problem. Our results show that the probing technique yields better approximation properties than the reduced boundary condition commonly applied with multiscale methods.
Presented at CMWR 2010 - XVIII International Conference on Computational Methods in Water Resources, June 21-24, 2010, Barcelona, Spain
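The probing idea can be sketched generically (a toy example of probing a banded operator, not the authors' MCDD construction): if the Schur complement is known to be approximately tridiagonal, three probing vectors with ones spaced three apart recover all three bands from only three operator applications.

```python
n = 9

# A tridiagonal "Schur complement" accessible only through matvecs,
# here the standard (-1, 2, -1) stencil as a stand-in.
def matvec(v):
    out = [0.0] * n
    for i in range(n):
        out[i] = 2.0 * v[i]
        if i > 0:
            out[i] += -1.0 * v[i - 1]
        if i < n - 1:
            out[i] += -1.0 * v[i + 1]
    return out

# Probing vectors: probe k has ones at positions k, k+3, k+6, ...
probes = [[1.0 if i % 3 == k else 0.0 for i in range(n)] for k in range(3)]
images = [matvec(v) for v in probes]

# Row i has nonzeros only in columns i-1, i, i+1, and those columns lie
# in three different probes, so each entry is read off without mixing.
diag = [images[i % 3][i] for i in range(n)]
sub  = [images[(i - 1) % 3][i] for i in range(1, n)]
sup  = [images[(i + 1) % 3][i] for i in range(n - 1)]
```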
Fri, 01 Jan 2010 00:00:00 GMT
http://hdl.handle.net/1956/7417
http://hdl.handle.net/1956/7416
Robust Multiscale Control-volume Methods for Reservoir Simulation
Sandvin, Andreas
Doctoral thesis
Mon, 30 Apr 2012 00:00:00 GMT
http://hdl.handle.net/1956/7416
http://hdl.handle.net/1956/7415
Mass Conservative Domain Decomposition for Porous Media Flow
Nordbotten, Jan Martin; Keilegavlen, Eirik; Sandvin, Andreas
Chapter
Wed, 28 Mar 2012 00:00:00 GMT
http://hdl.handle.net/1956/7415
http://hdl.handle.net/1956/7109
Energieffektivitet i grunne geotermiske systemer. Modellering og analyse av systemet på Ljan skole.
Straalberg, Eirik Ask
Master thesis
Ground-source heat is an increasingly important and widely used energy source in a world with growing energy demand, and it is therefore important to design systems that can exploit this energy efficiently. This thesis examines the shallow, closed-loop geothermal installation at Ljan school in Oslo, where a geothermal heat pump is supplied with heat from a geothermal borehole park and an asphalt-covered ground solar collector. A main goal of the thesis is to assess the energy efficiency of this system. This is done with the simulation tool TRNSYS, in which system models of the installation are built. Borehole-based storage of solar heat is studied in particular, and it is assessed whether this is an energy-efficient solution. To verify the models and results, selected simulation results are compared with operational data from Ljan school.
It is concluded that the system at Ljan school is energy efficient, and the calculations indicate that the geothermal system nearly halves heating costs compared with an electric or oil-fired heating system. The system does not, however, appear to benefit from borehole-based energy storage: simulations without energy storage show even higher energy efficiency than simulations with it, also in the long term.
Mon, 17 Jun 2013 00:00:00 GMT
http://hdl.handle.net/1956/7109
http://hdl.handle.net/1956/7076
Syntetisk CDO: iTraxx-prising ved bruk av en Normal Invers Gaussisk Copula
Nordanå, Kjetil Sørlien
Master thesis
In this thesis we will see that there is a large market for credit derivatives after the financial crisis, but with a shift towards synthetic variants. The future outlook is uncertain, however, as these products are in a special position with respect to new regulations. Nevertheless, credit derivatives offer great opportunities for transferring risk, so that market participants can offload parts of their portfolios. For market participants who either already hold a position or are considering investments in such derivatives, models that can reproduce market prices with small deviations in a time-efficient manner are important for making risk assessments.
Throughout the thesis, three models are therefore presented and tested on the tranched iTraxx Europe index: the Gaussian LHP, the Student t copula, and the NIG-LHP. We will see that only the last of these reproduces market prices satisfactorily in an efficient manner, and it always prices the junior mezzanine tranche correctly. The reason is the other models' inability to reproduce the correlation smile observed in the market. In addition, they have only one parameter, the correlation, which is not enough to capture a structure as complex as a synthetic CDO. The NIG model has several (intuitive) parameters in addition to the correlation and therefore fits market prices more easily. Moreover, the NIG model we present is semi-analytic, so it is fast to compute. Throughout the thesis we explore its ability to reproduce market prices over several different time periods, where we found the period during the financial crisis particularly difficult because of the low liquidity in the market. With the ongoing debt crisis in the European countries, we can expect continued demand for credit derivatives on European companies, so it will be interesting to follow developments ahead.
Thu, 04 Apr 2013 00:00:00 GMT
http://hdl.handle.net/1956/7076
http://hdl.handle.net/1956/6998
Square-free modules and ideals: Brill–Noether theory, polarizations, and deformations.
Lohne, Henning
Doctoral thesis
Fri, 24 May 2013 00:00:00 GMT
http://hdl.handle.net/1956/6998
http://hdl.handle.net/1956/6994
Brill–Noether theory of squarefree modules supported on a graph
Fløystad, Gunnar; Lohne, Henning
Journal article; Peer reviewed
We investigate the analogy between squarefree Cohen-Macaulay modules supported on a graph and line bundles on a curve. We prove a Riemann–Roch theorem, we study the Jacobian and gonality of a graph, and we prove Clifford's theorem.
Wed, 01 May 2013 00:00:00 GMT
http://hdl.handle.net/1956/6994
http://hdl.handle.net/1956/6967
Analysis of multi-beam sonar echos of herring schools by means of simulation
Holmin, Arne Johannes
Doctoral thesis
<p>The synchronized behavior of large schools of fish can be a fascinating sight and has
caught the attention of researchers for decades. Schools of thousands, or even millions of
fish can seemingly function as a single entity, by mechanisms that are still not entirely
understood. Models describing the individual behavior (individual based models) have
been shown to predict certain schooling features such as predator avoidance, whereas
experiments with fish schools in tanks have revealed rules governing the interaction between
neighboring fish. However, the step from an individual based model or a school
of a limited number of fish in a tank experiment, to large free swimming schools in the
ocean, may include challenges with regards to the observation techniques and to conditions
of the environment that are not necessarily reproduced in the individual based
model or in the tank.</p> <p>The latest technological advances in underwater acoustic observation have introduced the potential of observing schools of fish of sizes up to a few hundred meters through three-dimensional images generated by multi-beam sonars. The Simrad MS70 multi-beam
sonar provides true three-dimensional images at a temporal resolution down to approximately
one image per second, enabling observations of the dynamic behavior of large
fish schools in situ, at a spatial and temporal resolution that has not been previously
available. In this thesis, data from the MS70 sonar are analyzed by means of simulation,
and steps are taken towards establishing a link between the modeled behavior of an individual
and the observed behavior of real fish schools.</p> <p>The principal analytical tool utilized in the thesis is a simulation model developed by the
author (and co-authors), which simulates observations from multi-beam sonars based on
the positions, orientations, and acoustical properties of arbitrary groups of individual
fish, and the configuration and acoustical properties of the sonar and the environment.
The framework of the simulation model is presented in the first of three papers in the
thesis, along with examples of its use as implemented for the EK60 multi-frequency
echosounder, the ME70 multi-beam echosounder, and the MS70 multi-beam sonar, all
manufactured by Simrad. The simulation experiments shown in the first paper illustrate
for example the potential of the MS70 sonar to provide information about the behavior
of fish schools. Specifically, in one of the experiments, a herring school with original
mean heading perpendicular to the central sonar beams is modified to represent eight
different idealized orientation scenarios, obtained by rotating the fish in specific sections
of the school by 90° towards the sonar. This produced reduced backscatter in the sections
of the school which were rotated, due to the directionality of the scattering from
herring at the acoustic frequencies of the sonar, showing the potential for falsely interpreting
orientation changes as fish density changes, but also the potential for extracting information about the local orientation distribution of the school.</p> <p>In the second paper, the parameters and stochastic properties of the noise (defined
as all contributions to the received acoustic intensity not backscattered from targets)
present in data from the MS70 sonar and EK60 echosounder are estimated from passive
recording sequences, motivated by the following three potential uses: (1) subtraction
of noise from real data, (2) simulation of noise in synthetic data, and (3) to provide a
basis for the development of methods for segmentation of MS70 data of fish schools (applied
in the third paper). Particularly, a simulation experiment from the first paper is
repeated with addition of noise, in which the polarization (degree of alignment of the
individual fish) of a real school which had been circumnavigated by the vessel for several
rounds, was estimated by matching the total backscatter of the real school and the
total backscatter of simulated schools with a variety of polarizations and packing densities.
The experiment illustrates the effect of noise on the estimated polarization, and
the potential to infer packing density of the school from the simulation experiment.</p> <p>The estimated noise from the second paper is utilized in the third and final paper, in
the development and testing of a new segmentation method for multi-beam sonar data,
which is compared to an existing segmentation method implemented in the post processing
system LSSS (Large Scale Survey System). The new method applies a Bayesian
approach, where the cumulative distribution function (CDF) of the packing density of
omnidirectional targets (scattering equally in all directions) in a voxel is estimated based
on the observed data, the estimated noise, and a prior probability distribution of the
signal. The CDF is evaluated at a specified lower schooling threshold, and the resulting probabilities that the packing density is below the schooling threshold are smoothed by a Gaussian kernel in the logarithmic domain (implying products of the probabilities in the neighborhood around the voxel). Voxels for which the smoothed probability is below a segmentation threshold are identified as containing the school. The two segmentation methods
are tested on simulated data of 240 herring schools of various shapes, sizes, packing
densities, and depths, and compared with ground truth segmentation data generated
from the fish positions used as input to the simulation model to identify recommended
parameter settings for both methods, and to determine differences in the performance
between the methods. The new method is shown to produce estimates of the school extent,
total volume, total target strength (total echo), and mean volume backscattering
strength (mean echo density) which are generally closer to the corresponding theoretical
values estimated from the ground truth segmentation data.</p>
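The log-domain smoothing step can be sketched in one dimension (invented probabilities, not MS70 data; the real method operates on 3-D voxel grids):

```python
import math

# p[i]: per-voxel probability that the packing density is below the
# schooling threshold (hypothetical numbers for illustration).
p = [0.9, 0.8, 0.05, 0.02, 0.03, 0.7, 0.95]

def gaussian_kernel(radius, sigma):
    w = [math.exp(-0.5 * (k / sigma) ** 2) for k in range(-radius, radius + 1)]
    s = sum(w)
    return [x / s for x in w]

def smooth_log(p, radius=1, sigma=1.0):
    # Smoothing log-probabilities amounts to a weighted geometric mean,
    # i.e. a product of powers of the probabilities in the neighborhood.
    w = gaussian_kernel(radius, sigma)
    out = []
    for i in range(len(p)):
        acc = 0.0
        for k, wk in zip(range(-radius, radius + 1), w):
            j = min(max(i + k, 0), len(p) - 1)   # clamp at the edges
            acc += wk * math.log(p[j])
        out.append(math.exp(acc))
    return out

smoothed = smooth_log(p)
threshold = 0.5
school = [s < threshold for s in smoothed]   # voxels flagged as school
```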
Fri, 31 May 2013 00:00:00 GMT
http://hdl.handle.net/1956/6967
http://hdl.handle.net/1956/6958
Simulations of multi-beam sonar echos from schooling individual fish in a quiet environment
Holmin, Arne Johannes; Handegard, Nils Olav; Korneliussen, Rolf J.; Tjøstheim, Dag
Journal article; Peer reviewed
A model is developed and demonstrated for simulating echosounder and sonar observations of fish
schools with specified shapes and composed of individuals having specified target strengths and
behaviors. The model emulates the performances of actual multi-frequency echosounders and
multi-beam echosounders and sonars and generates synthetic echograms of fish schools that can be
compared with real echograms. The model enables acoustic observations of large in situ fish
schools to be evaluated in terms of individual and aggregated fish behaviors. It also facilitates
analyses of the sensitivity of fish biomass estimates to different target strength models and their
parameterizations. To demonstrate how this tool may facilitate objective interpretations of
acoustically estimated fish biomass and behavior, simulated echograms of fish with different spatial
and orientation distributions are compared with real echograms of herring collected with a multi-beam sonar aboard the research vessel “G.O. Sars.” Results highlight the important effects of
fish-backscatter directivity, particularly when sensing with small acoustic wavelengths relative to
the fish length. Results also show that directivity is both a potential obstacle to estimating fish
biomass accurately and a potential source of information about fish behavior.
Sat, 01 Dec 2012 00:00:00 GMT
http://hdl.handle.net/1956/6958
http://hdl.handle.net/1956/6817
Computer-aided proofs and algorithms in analysis
Bartha, Ferenc A.
Doctoral thesis
<p>Computational power has increased dramatically since the appearance of the first computers, making them a vital tool in the analysis of dynamical systems. We present further applications of two basic ideas, namely interval arithmetic and automatic differentiation, which address the question of the reliability of results and the difficulty of calculating derivatives.</p>
<p>In general, the result of a numerical calculation will be influenced by errors, since the set of numbers represented by the machine is finite. This inevitably leads to round-off and truncation errors. This should not be considered a problem, but rather the true nature of numerics. Notorious examples, like evaluating 333.75y^6 + x^2(11x^2y^2 − y^6 − 121y^4 − 2) + 5.5y^8 + x/(2y) at (x, y) = (77617, 33096), or plotting the polynomial t^6 − 6t^5 + 15t^4 − 20t^3 + 15t^2 − 6t + 1 in a small neighborhood of 1, still produce unexpected outcomes if one is unaware of the potential risks of floating point computations. We mention the failure of a Patriot missile on February 25, 1991, and the explosion of the unmanned space rocket Ariane 5 on June 4, 1996, as practical examples of these potential risks becoming real.</p>
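The first evaluation example above (Rump's polynomial) is easy to reproduce; the sketch below contrasts naive IEEE double precision with exact rational arithmetic:

```python
from fractions import Fraction

def f(x, y, a, b, half):
    # Rump's expression; the constants are passed in so the same code
    # can run in either float or exact rational arithmetic.
    return (a * y ** 6
            + x ** 2 * (11 * x ** 2 * y ** 2 - y ** 6 - 121 * y ** 4 - 2)
            + b * y ** 8
            + x * half / y)

# Naive double precision: catastrophic cancellation among terms of
# magnitude ~1e36 makes the result wildly wrong.
naive = f(77617.0, 33096.0, 333.75, 5.5, 0.5)

# Exact rational arithmetic gives the true value, -54767/66192,
# approximately -0.8273960599.
exact = f(Fraction(77617), Fraction(33096),
          Fraction('333.75'), Fraction('5.5'), Fraction(1, 2))
```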
<p>Therefore, in mathematical proofs, where the beauty of the argument is its unquestionable truth itself, the use of computers must be handled with extreme care. One technique used to overcome these problems and make our computations rigorous is called interval arithmetic.</p>
<p>Calculating derivatives of a given function is often considered a hard problem, since in general the complexity of the derivative's formula grows exponentially as the order or the dimension increases. The observation that we generally do not need these formulae, but only certain values of the derivatives, is crucial to understanding why automatic differentiation is so useful.</p>
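This observation can be made concrete with a minimal sketch (generic forward-mode automatic differentiation with dual numbers; not the thesis code, which follows Griewank et al. for higher order multivariate derivatives):

```python
# Forward-mode AD: carry (value, derivative) pairs through arithmetic,
# so derivative values are computed without ever forming a formula.
class Dual:
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der

    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__

    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # Product rule applied to the derivative components.
        return Dual(self.val * o.val, self.der * o.val + self.val * o.der)
    __rmul__ = __mul__

def f(x):
    return 3 * x * x + 2 * x + 1   # f'(x) = 6x + 2

x = Dual(2.0, 1.0)   # seed derivative 1 to differentiate w.r.t. x
y = f(x)             # y.val = f(2) = 17, y.der = f'(2) = 14
```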
<p>The structure of the thesis is as follows. In Part I we give an introduction to the
methods used in our papers. In Chapter 1 we get acquainted with the basic techniques,
interval arithmetic, interval analysis, floating point computations and automatic differentiation.
Chapter 2 gives an overview of the interaction between dynamical systems and different representations of the data. In Chapter 3 we take on the basic concept of automatic differentiation seen before, and present a method by Griewank et al. [17] to compute higher order derivatives of multivariate functions that will be used in Paper A. We go through the theory of graph representations in Chapter 4 by following the steps of Hohmann and Dellnitz [12] and Galias [15]. This theory may be used in qualitative analysis of maps. We give two applications in Paper B and Paper C. In addition, we
give the proof of correctness of the algorithm for enclosing non-wandering points in Paper B. In Chapter 5 we introduce the reader to the method of self-consistent bounds by
Zgliczyński and Mischaikow [44] and Zgliczyński [40, 42, 43] that may be used to analyze
a certain class of dissipative partial differential equations. An application of this
concept to a destabilized Kuramoto-Sivashinsky equation is given in Paper D. Chapter 6
gives a short overview of the results of the included papers.
Part II is the main scientific contribution of this thesis, consisting of the formerly
mentioned four papers.</p>
Fri, 14 Jun 2013 00:00:00 GMT
http://hdl.handle.net/1956/6817
http://hdl.handle.net/1956/6802
Ill Posedness Results for Generalized Water Wave Models
Teyekpiti, Vincent Tetteh
Master thesis
In the first part of the study, the weak asymptotic method is used to find singular solutions of the shallow water system in both one and two space dimensions. The singular solutions so constructed are allowed to contain Dirac delta distributions (Espinosa & Omel'yanov, 2005). The idea is to construct complex-valued approximate solutions which become real-valued in the distributional limit. The approach, which extends the range of possible singular solutions, is used to construct solutions which contain combinations of hyperbolic shock waves and Dirac delta distributions. It is shown in the second part that the Cauchy problem for Korteweg-de Vries (KdV) type equations is locally ill-posed in a negative Sobolev space. The method is used to construct a solution which does not depend continuously on its initial data in H^{s_epsilon}, with s_epsilon = -1/2 - epsilon.
Mon, 03 Jun 2013 00:00:00 GMT
http://hdl.handle.net/1956/6802
http://hdl.handle.net/1956/6787
The Norwegian Stock Market: - A Local Gaussian Perspective
Lura, Andreas
Master thesis
In this thesis, using daily returns from 18 stocks, the oil price, exchange rates, and the main index of the Oslo Stock Exchange over a period of 5 years, we investigate how the Local Gaussian Correlation can be used to describe the change in the relationship between stocks and the market, and how it can extend already established theory in finance.
Topics covered in this thesis are: risk estimation by conventional risk measures and by a method based on Local Gaussian Correlation, the Capital Asset Pricing Model (CAPM), copulas, and GARCH as a description of volatility and of the marginal distributions for copulas.
Value at Risk and Expected Shortfall are well established risk measures in finance. They depend on a good description of the distribution in the tail, which can be challenging. These measures provide only a single number as a description of the risk; this may be appealing, but does not really provide detailed information.
Using the theory of CAPM, there have been attempts to describe the change in risk by means of the so-called conditional moments of the observations. This approach may be biased, as the conditional moment fails to describe the constant correlation and variances of the Gaussian distribution.
Using instead the local parameters found when calculating the Local Gaussian Correlation as a local description of the beta on our data, there seems to be higher risk in the upper and lower tails than in the middle. However, what differs from the results of the previously mentioned approach is that the risk in the upper tail seems to be higher than in the lower. This might be explained by very large gains for the stock market being followed by a possible stock market downturn or even a crash (bubble), while negative values for the market are less likely to be resolved by a sudden positive boost for the market.
Mon, 03 Jun 2013 00:00:00 GMT
http://hdl.handle.net/1956/6787
http://hdl.handle.net/1956/6786
DCC-GARCH modeller med ulike avhengighetsstrukturer
Aardal, Helene
Master thesis
The main focus of this thesis is to find good methods for modelling volatility and dependence structure in financial portfolios. A specific multivariate GARCH model, the Dynamic Conditional Correlation (DCC-)GARCH, is combined with copulas and pair-copula constructions to obtain a more flexible model for exactly this purpose.
GARCH models are tools for predicting and analysing the volatility of time series when it varies over time, while copulas make it possible to model the dependence structure and the marginals separately.
Several DCC-GARCH models with different dependence structures are implemented in the thesis: a Copula-DCC-GARCH model with a multivariate Student t copula, a PCC-DCC-GARCH model with a Student t copula for every pair of variables, and a PCC-DCC-GARCH model in which the pair-copula construction consists of both Clayton and Student t copulas.
Thu, 14 Mar 2013 00:00:00 GMT
http://hdl.handle.net/1956/6786
2013-03-14T00:00:00Z

Semiparametric model selection for copulas
http://hdl.handle.net/1956/6778
Semiparametric model selection for copulas
Jordanger, Lars Arne
Master thesis
This thesis will consider the performance of the cross-validation copula information criterion, xv-CIC, in the realm of finite samples.
The theory leading to the xv-CIC will be outlined, and an analysis will be conducted on an assorted collection of bivariate one-parameter copula models. The restriction to the bivariate case is not a grave one, since more complex d-variate samples can be broken down into a study of conditioned bivariate samples, by the methodology of regular vine-copulas, the pair copula construction and stepwise-semiparametric estimation of parameters.
As a by-product of our analysis, we can give advice regarding the choice of model selection method in the semiparametric realm.
Mon, 03 Jun 2013 00:00:00 GMT
http://hdl.handle.net/1956/6778
2013-06-03T00:00:00Z

Generated Sound Waves in Corrugated Pipes
http://hdl.handle.net/1956/6693
Generated Sound Waves in Corrugated Pipes
Lillejord, Arve
Master thesis
This thesis studies sound waves generated in corrugated pipes by using computational fluid dynamics (CFD). The author did not have any prior experience in CFD, and a substantial part of the work was therefore dedicated to CFD theory. On this background, the thesis aimed to develop the boundary conditions and solver settings necessary to simulate the problem using OpenFOAM. The main result consists of two cases: the first studied the flow over a single corrugation, and the second the flow through a corrugated pipe. The first case was able to predict frequencies matching previous work. The second case was not able to reproduce the features connected to singing in corrugated pipes, which is largely attributed to how the inlet boundary condition was set.
Fri, 01 Jun 2012 00:00:00 GMT
http://hdl.handle.net/1956/6693
2012-06-01T00:00:00Z

Misconceptions in algebra in lower secondary school - a diagnostic approach
http://hdl.handle.net/1956/6692
Misconceptions in algebra in lower secondary school - a diagnostic approach
Hompland, Anne Berit
Master thesis
This thesis deals with 10th-grade pupils' work with algebra. More specifically, I have taken a qualitative approach to investigating the misconceptions they hold within this topic. I have further looked at how, from this starting point, the misconceptions can be addressed in teaching by means of a diagnostic working method. With these aims for the work, I chose to carry out my own investigations to gather material.
Constructivist learning theories form the theoretical framework of the thesis. More specifically, this covers concepts such as misconceptions, diagnostic tasks and diagnostic teaching. Against this background, I have drawn on previous research on misconceptions in algebra and applied it in my own investigations.
Although the qualitative aspect was the motivation for undertaking the investigations, I chose to use method triangulation because it gave a more complete approach to what I wanted to examine. The qualitative parts were interviews with three pupils and diagnostic teaching in a group of 14 pupils. Before this, I had administered a quantitative diagnostic test to the entire 10th grade, and afterwards I administered a new diagnostic test to the group of 14 pupils.
The results, from both the diagnostic tests and the interviews, showed a group of pupils with great difficulties in algebra, reflecting what PISA and TIMSS surveys from earlier years have already pointed out. Previously documented misconceptions, from both Norwegian and foreign studies, appeared among these pupils. In the diagnostic teaching I directed the focus straight at the misconceptions in order to work through them if possible. The subsequent diagnostic test showed a weak result in terms of improvement, but as the related discussion shows, this may have causes other than my particular approach to diagnostic work with algebra.
In conclusion, I point to three issues that I regard as the main challenges in working with misconceptions in algebra: the arithmetical foundation, the understanding of letter symbols, and the pupils' use of intuitive strategies. Although my sample is not large enough to say whether these are general tendencies, I believe I have grounds to say that these are challenges that mathematics teachers should be aware of.
Thu, 24 May 2012 00:00:00 GMT
http://hdl.handle.net/1956/6692
2012-05-24T00:00:00Z

Ability grouping in mathematics in lower secondary school - a qualitative study
http://hdl.handle.net/1956/6691
Ability grouping in mathematics in lower secondary school - a qualitative study
Tindeland, Anita
Master thesis
The national curriculum, Kunnskapsløftet (K06), places great emphasis on education adapted to each individual pupil. Several lower secondary schools have introduced ability grouping in mathematics as a measure to meet this requirement. In Norway the idea of the comprehensive school stands strong, and ability grouping is a highly contested topic; both researchers and politicians are divided on the issue. Using the qualitative research interview, I have taken a closer look at how ability grouping is carried out at two schools in the Bergen area. The research question I have addressed is: What consequences does ability grouping of mathematics teaching at the lower secondary level have at the social, personal and organisational levels? In the course of the work I have made a large number of complex findings, and I therefore do not give a final answer for or against ability grouping. In the text I shed light on the positive and negative consequences ability grouping has for the everyday school life of pupils and teachers. Among other things, I have looked at pupils' and teachers' perceptions of the arrangement's effect on bullying, social status, motivation, mastery and defence strategies, as well as the organisation of group division and teaching. I find that there is room for improvement in the arrangements I have examined, in particular in how the teaching is organised in the different groups and how the opportunities for adaptation and variation are exploited. Successful ability grouping - if such a thing exists - depends on a number of factors: the pupil body, the teachers, the available resources and the teaching methods used, to name a few. In this study I have illuminated some of the consequences of such an arrangement at two specific schools. I find that the arrangements examined have both positive and negative sides, and I conclude by presenting some suggestions that might help make them even better.
Wed, 30 May 2012 00:00:00 GMT
http://hdl.handle.net/1956/6691
2012-05-30T00:00:00Z

Adaptive parameterization of electric conductivity in inversion of electromagnetic data
http://hdl.handle.net/1956/6687
Adaptive parameterization of electric conductivity in inversion of electromagnetic data
Ek, Torbjørn Helland
Master thesis
We describe a methodology developed for 3-D parameter identification, with focus on large-scale applications such as monitoring subsea oil production and geothermal systems. The methodology is designed to handle challenges related to low parameter sensitivity, non-uniqueness of the inverse solutions and costly numerical calculations. A reduced, composite parameter representation is chosen to meet these challenges. Our contributions to the methodology involve choosing a reduced representation with radial basis functions, to maintain a low number of parameters. We also propose the use of a first-order selection measure in the refinement process to reduce the computational costs. The performance of the proposed changes to the methodology is illustrated in a series of examples for estimating the change in electric conductivity from time-lapse electromagnetic observations. The results show some limitations regarding the accuracy of the first-order selection measure. For the investigated numerical examples, radial basis functions, together with the described methodology, effectively provide an estimate of the electric conductivity field from electromagnetic measurements.
Fri, 01 Jun 2012 00:00:00 GMT
http://hdl.handle.net/1956/6687
2012-06-01T00:00:00Z

Gonality of Points in Brill-Noether Loci of the Moduli Space of Curves
http://hdl.handle.net/1956/6686
Gonality of Points in Brill-Noether Loci of the Moduli Space of Curves
Sæbø, Gard
Master thesis
In this master's thesis we study the geometry of
the points in the Brill-Noether locus. Typically,
we want to study the gonality of points in this
locus, which can be viewed as projective curves.
We also study which curves on K3 surfaces
correspond to smooth points in the Hilbert scheme.
Thu, 31 May 2012 00:00:00 GMT
http://hdl.handle.net/1956/6686
2012-05-31T00:00:00Z

Impact of compressibility in vertically integrated models for CO2 storage
http://hdl.handle.net/1956/6685
Impact of compressibility in vertically integrated models for CO2 storage
Reistad, Silje Rognsvåg
Master thesis
Recent work has shown that simplified models obtained by vertical integration give reasonable approximations for CO2 migration. Most previous work on this topic assumes that CO2 is an incompressible fluid, or works with a quasi-compressibility, considering horizontal but not vertical compressibility. However, CO2 is highly compressible, and large variations in CO2 density can be expected due to pressure increases near the injection well. In addition, long-term CO2 migration to shallower depths can result in significantly lower density, as dictated by geothermal gradients and regional pressure conditions. In this study, we outline the mathematical and numerical approach to solving the coupled vertically integrated models with density variation. Mathematical models for CO2 storage are considered in which the equations for mass conservation and Darcy's law are vertically integrated and coupled with an equation of state for the CO2 density. Based on these, a pressure equation is derived. This pressure equation is then discretised using an implicit control volume method and a partially implicit Lax-Friedrichs method. A solution approach based on an IMPES scheme is suggested.
We have unfortunately not been able to solve this problem, as there have been some difficulties with the boundary values in the code, but we suggest a plan for further work towards a working code.
Fri, 01 Jun 2012 00:00:00 GMT
http://hdl.handle.net/1956/6685
2012-06-01T00:00:00Z

Critical Measures and Parameter Space of Jenkins-Strebel Quadratic Differentials
http://hdl.handle.net/1956/6684
Critical Measures and Parameter Space of Jenkins-Strebel Quadratic Differentials
Frolova, Anastasia
Master thesis
The thesis is devoted to applications of the
theory of quadratic differentials to the problem
of constructing critical measures, which provide
critical values of the logarithmic energy of a
charge system in the complex plane. We study the
properties of the supports of such measures. We
also consider the problem of describing the
parameter space of Jenkins-Strebel differentials,
which is related to the problem of describing
critical measures.
Mon, 07 May 2012 00:00:00 GMT
http://hdl.handle.net/1956/6684
2012-05-07T00:00:00Z

Vascular responses to radiotherapy and androgen-deprivation therapy in experimental prostate cancer
http://hdl.handle.net/1956/6664
Vascular responses to radiotherapy and androgen-deprivation therapy in experimental prostate cancer
Røe, Kathrine; Mikalsen, Lars T. G.; Kogel, Albert J. van der; Bussink, Johan; Lyng, Heidi; Olsen, Dag R.
Peer reviewed; Journal article
<p>Background: Radiotherapy (RT) and androgen-deprivation therapy (ADT) are standard treatments for advanced prostate cancer (PC). Tumor vascularization is recognized as an important physiological feature likely to impact on both RT and ADT response, and this study therefore aimed to characterize the vascular responses to RT and ADT in experimental PC.</p> <p>Methods: Using mice implanted with CWR22 PC xenografts, vascular responses to RT and ADT by castration were visualized in vivo by DCE MRI, before contrast-enhancement curves were analyzed both semi-quantitatively and by pharmacokinetic modeling. Extracted image parameters were correlated to the results from ex vivo quantitative fluorescent immunohistochemical analysis (qIHC) of tumor vascularization (9F1), perfusion (Hoechst 33342), and hypoxia (pimonidazole), performed on tissue sections made from tumors excised directly after DCE MRI.</p> <p>Results: Compared to untreated (Ctrl) tumors, an improved and highly functional vascularization was detected in androgen-deprived (AD) tumors, reflected by increases in DCE MRI parameters and by increased number of vessels (VN), vessel density (VD), and vessel area fraction (VF) from qIHC. Although total hypoxic fractions (HF) did not change, estimated acute hypoxia scores (AHS) – the proportion of hypoxia staining within 50 μm of perfusion staining – were increased in AD tumors compared to Ctrl tumors. Five to six months after ADT, renewed castration-resistant (CR) tumor growth appeared, with an even further enhanced tumor vascularization. Compared to the large vascular changes induced by ADT, RT induced minor vascular changes. Correlating DCE MRI and qIHC parameters revealed that the semi-quantitative parameter area under the curve (AUC) from initial time-points correlated strongly with VD and VF, whereas estimation of vessel size (VS) by DCE MRI required pharmacokinetic modeling. HF was not correlated to any DCE MRI parameter; however, AHS may be estimated after pharmacokinetic modeling. Interestingly, such modeling also detected tumor necrosis very strongly.</p> <p>Conclusions: DCE MRI reliably allows non-invasive assessment of tumors’ vascular function. The findings of increased tumor vascularization after ADT encourage further studies into whether these changes are beneficial for combined RT, or if treatment with anti-angiogenic therapy may be a strategy to improve the therapeutic efficacy of ADT in advanced PC.</p>
Wed, 23 May 2012 00:00:00 GMT
http://hdl.handle.net/1956/6664
2012-05-23T00:00:00Z

Multiscale simulation of flow and heat transport in fractured geothermal reservoirs
http://hdl.handle.net/1956/6593
Multiscale simulation of flow and heat transport in fractured geothermal reservoirs
Sandve, Tor Harald
Doctoral thesis
Mon, 04 Mar 2013 00:00:00 GMT
http://hdl.handle.net/1956/6593
2013-03-04T00:00:00Z

Local Likelihood
http://hdl.handle.net/1956/6583
Local Likelihood
Otneim, Håkon
Master thesis
Methods for probability density estimation are
traditionally classified as either parametric or
non-parametric. Fitting a parametric model to
observations is generally a good idea when we have
sufficient information on the origin of our data;
if not, we must turn to non-parametric methods,
usually at the cost of poorer performance.
This thesis discusses local maximum likelihood
estimation of probability density functions, which
can be regarded as a compromise between the two
mindsets. The idea is to fit a parametric model
locally, that is, to let the parameters and their
estimates depend on the location. If the chosen
model is close to the true, unknown density, we
retain many of the appealing properties of a fully
parametric approach. On the other hand, local
likelihood density estimates have performance
comparable to well known non-parametric methods,
even though the locally fitted parametric model
differs from the true density in a global sense.
Although traditional methods withstand the test of
time as excellent options in many situations, the
local maximum likelihood estimator opens up a
range of applications. Hjort and Jones [1996], who
will serve as the main reference for this thesis,
call it semi-parametric density estimation, as it
is particularly useful when we have partial
knowledge on the shape of the unknown density, but
not enough to trust the ordinary, global maximum
likelihood estimates. Further, many authors have
extended the idea of locally parametric estimation
to applications beyond density estimation, several
of whom are mentioned and included as references
throughout the thesis.
One-dimensional density estimation will, however,
be the primary focus here, with particular
emphasis on large sample theory. The main results
concern the asymptotic bias, which is shown to be
of larger order than the bias of traditional
kernel estimation as the sample size increases to
infinity and the bandwidth decreases towards zero.
Nonetheless, in practical situations with
reasonable sample sizes, the local likelihood
estimator is shown to perform very well, with an
appealing robustness against under- and
oversmoothing. Indeed, no experiment performed
shows signs of deterioration of local likelihood
estimates relative to kernel estimation as the
sample size grows.
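The locally parametric idea can be sketched in a few lines. The following is my own illustration of the Hjort and Jones [1996] local likelihood criterion for a normal model, not code from the thesis; the bandwidth, sample size and optimizer are arbitrary choices. At a point x, it maximizes the kernel-weighted log-likelihood minus the Hjort-Jones penalty term, then evaluates the locally fitted normal density at x.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def local_likelihood_density(x, data, h):
    """Local likelihood fit of a normal model, evaluated at the point x."""
    t = np.linspace(data.min() - 4.0, data.max() + 4.0, 800)
    dt = t[1] - t[0]
    kx = norm.pdf(data, loc=x, scale=h)    # kernel weights K_h(X_i - x)
    kt = norm.pdf(t, loc=x, scale=h)       # kernel on the integration grid

    def neg_local_lik(theta):
        mu, log_sigma = theta
        sigma = np.exp(log_sigma)          # parametrize to keep sigma > 0
        fit = np.mean(kx * norm.logpdf(data, mu, sigma))
        # Hjort-Jones penalty: integral of K_h(t - x) f(t; theta) dt
        penalty = np.sum(kt * norm.pdf(t, mu, sigma)) * dt
        return -(fit - penalty)

    res = minimize(neg_local_lik, x0=[x, 0.0], method="Nelder-Mead")
    mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
    return norm.pdf(x, mu_hat, sigma_hat)  # density of the locally fitted normal

rng = np.random.default_rng(2)
sample = rng.standard_normal(2000)
fhat0 = local_likelihood_density(0.0, sample, h=0.5)
print(f"estimate at 0: {fhat0:.3f} (true value: {norm.pdf(0):.3f})")
```

With the model correctly specified, as here, the estimate is close to the parametric fit; the interesting case in the thesis is when the local normal fit is only locally, not globally, correct.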
Mon, 23 Apr 2012 00:00:00 GMT
http://hdl.handle.net/1956/6583
2012-04-23T00:00:00Z

GAMLSS models in car insurance
http://hdl.handle.net/1956/6582
GAMLSS models in car insurance
Røyrane-Løtvedt, Hallvard
Master thesis
In this thesis I test various models for predicting the total claim payment from an insurance company to a policyholder in a policy year. The models tested belong to the framework of Generalized Additive Models for Location, Shape and Scale (GAMLSS), introduced by Rigby and Stasinopoulos (2001). The data used in the thesis come from a Norwegian insurance company and consist of information about policies and claims in car insurance in the years 2000-2005. Using only three explanatory variables (calendar year, car age and policyholder age), I show in this thesis that the choice of statistical model is decisive for the predictions of the claim payment (Chapter 9). I further examine how the model predictions can be used to construct a realistic pricing model, and how the pricing model yields different results for the different prediction models (Chapter 10).
The total claim payment divides naturally into claim frequency and claim size. I test both models that model these separately and models that model the total claim payment directly, and I argue that the direct models are preferable. The recommended model is a Zero-Adjusted Inverse Gaussian (ZAIG) model, in which the functional form of the explanatory variables is chosen so that the AIC is as low as possible. A ZAIG-distributed random variable takes the value 0 with probability ψ and follows an inverse Gaussian distribution with probability (1-ψ). Claim sizes are so skewed that an extremely skewed probability distribution, such as the inverse Gaussian, is needed to describe them. I also argue that the choice of probability distribution greatly affects the quality of the predictions.
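The ZAIG mechanism can be sketched with a short simulation; the values of ψ, μ and λ below are invented for illustration, not estimates from the insurance data. A payment is 0 with probability ψ and inverse-Gaussian otherwise, so the mean payment is (1-ψ)μ.

```python
import numpy as np
from scipy.stats import invgauss

rng = np.random.default_rng(3)
psi, mu, lam = 0.9, 20_000.0, 5_000.0   # P(no claim), IG mean, IG shape (invented)

n = 500_000
is_zero = rng.random(n) < psi
# scipy's invgauss(m, scale=s) has mean m*s; choosing m = mu/lam, s = lam
# reproduces the classical IG(mean=mu, shape=lam) parametrization.
severities = invgauss.rvs(mu / lam, scale=lam, size=n, random_state=rng)
payments = np.where(is_zero, 0.0, severities)

print(f"share of zeros: {np.mean(payments == 0):.3f} (psi = {psi})")
print(f"mean payment: {payments.mean():,.0f} (theory {(1 - psi) * mu:,.0f})")
```

The heavy right tail of the inverse Gaussian component is what lets the model accommodate strongly skewed claim sizes.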
Fri, 01 Jun 2012 00:00:00 GMT
http://hdl.handle.net/1956/6582
2012-06-01T00:00:00Z

Statistical models based on hidden Markov chains in continuous time
http://hdl.handle.net/1956/6581
Statistical models based on hidden Markov chains in continuous time
Harjo, Lisbet Lien
Master thesis
In this thesis I have considered situations where observations over time can be thought of as generated by a Markov chain, and studied the effect of various covariates on the intensities, with both binary and time-dependent covariates.
I have also looked at situations where the states of the Markov chain may be observed with error, so that one does not actually observe the underlying states, but only misclassified states determined by a probability distribution.
Finally, I have also tested a relevant package for the software R that has been developed for computations on this type of problem.
Fri, 01 Jun 2012 00:00:00 GMT
http://hdl.handle.net/1956/6581
2012-06-01T00:00:00Z

Copulas and Local Gaussian Correlation
http://hdl.handle.net/1956/6580
Copulas and Local Gaussian Correlation
Nordbø, Tommy Neverdahl
Master thesis
We look at copula models and dependence measures. In particular, the recently developed local dependence measure called local Gaussian correlation (LGC) is presented, and its connection with copula theory is explored. Theoretical LGC values of some well-known (and not so well-known) copula models are derived and used to make plots that show the dependence structure of the copula. Finally, we outline some ways of using this dependence measure for copula selection and goodness-of-fit tests.
Wed, 30 May 2012 00:00:00 GMT
http://hdl.handle.net/1956/6580
2012-05-30T00:00:00Z

Compass and straightedge, origami and Galois theory
http://hdl.handle.net/1956/6579
Compass and straightedge, origami and Galois theory
Vasdal, Thomas Arneberg
Master thesis
This thesis attempts to shed light on constructions with compass and straightedge, and with origami, by means of Galois theory. The three classical problems (doubling the cube, squaring the circle and trisecting the angle) are central. The thesis is aimed in particular at teachers with a "lektor" qualification in mathematics; having taken the introductory course in abstract algebra is an advantage.
Wed, 01 Jun 2011 00:00:00 GMT
http://hdl.handle.net/1956/6579
2011-06-01T00:00:00Z

Test methods for identifying publication bias in meta-analyses
http://hdl.handle.net/1956/6578
Test methods for identifying publication bias in meta-analyses
Gjerdevik, Miriam
Master thesis
Test methods for identifying publication bias in meta-analyses.
Wed, 24 Oct 2012 00:00:00 GMT
http://hdl.handle.net/1956/6578
2012-10-24T00:00:00Z

Statistical Approach to Relatedness Analysis in Large Collections of Genetic Profiles. An Application to a DNA-Registry of Fin Whales
http://hdl.handle.net/1956/6548
Statistical Approach to Relatedness Analysis in Large Collections of Genetic Profiles. An Application to a DNA-Registry of Fin Whales
Benonisdottir, Stefania
Master thesis
The present study utilized data from an Icelandic DNA registry of fin whales by searching for pairs of relatives with a statistical approach. Three types of relatedness were of interest: half-siblings, parent-offspring and first cousins. Detection of relatives was done by computing pairwise LOD scores for the individuals in the sample for each relatedness of interest. The corresponding p-value for each LOD score was estimated by comparing the original LOD scores with LOD scores of simulated unrelated individuals. Due to the very high number of pairwise comparisons, adjustment for multiple testing was necessary. Two well-known multiple-testing adjustment methods were applied and compared: the Bonferroni correction and the FDR procedure. The FDR procedure was found to be more suitable for this analysis, since the Bonferroni procedure was too conservative for such a high number of LOD scores. Eight pairs of relatives were detected within the sample. When information about age and age of maturity had been taken into account, three of those pairs were classified as parent-offspring pairs and five as half-siblings. One of the detected parent-offspring pairs was a male fin whale and a foetus, also detected as a father-offspring pair in a previous study.
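The contrast between the two corrections can be sketched as follows; the number of tests and of truly related pairs below are invented for illustration and are not the whale-registry figures. Benjamini-Hochberg rejects the k smallest p-values where k is the largest rank with p_(k) <= k*q/m, while Bonferroni uses the single threshold alpha/m.

```python
import numpy as np

def benjamini_hochberg(pvals, q):
    """Return a boolean mask of hypotheses rejected at FDR level q (step-up BH)."""
    m = len(pvals)
    order = np.argsort(pvals)
    thresh = q * np.arange(1, m + 1) / m       # rank-dependent thresholds
    below = pvals[order] <= thresh
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True
    return reject

rng = np.random.default_rng(4)
m = 100_000                                    # many pairwise comparisons
p_null = rng.random(m - 50)                    # unrelated pairs: uniform p-values
p_alt = rng.beta(0.05, 10.0, size=50)          # 50 related pairs: small p-values
pvals = np.concatenate([p_alt, p_null])

bonf = pvals <= 0.05 / m
bh = benjamini_hochberg(pvals, q=0.05)
print(f"Bonferroni rejections: {bonf.sum()}, BH rejections: {bh.sum()}")
```

Because the BH threshold grows with the rank of the p-value, it rejects at least as many hypotheses as Bonferroni at the same nominal level, which is why it retains more power over hundreds of thousands of LOD-score comparisons.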
Tue, 20 Nov 2012 00:00:00 GMT
http://hdl.handle.net/1956/6548
2012-11-20T00:00:00Z

Still Water Performance Simulation of a SWATH Wind Turbine Service Vessel
http://hdl.handle.net/1956/6546
Still Water Performance Simulation of a SWATH Wind Turbine Service Vessel
Angeltveit, Rune
Master thesis
In this thesis I carry out a computational fluid dynamics (CFD) simulation of a SWATH (Small Waterplane Area Twin Hull) wind turbine service vessel moving in still water at different speeds, using the CFD tool STAR-CCM+. Since I did not have any prior experience in CFD, a substantial part of the thesis is dedicated to CFD theory. First, the theory of fluid dynamics and CFD methods is described. Based on this theory, models and solvers for the simulation in STAR-CCM+ are chosen, with the required boundary conditions and initial values. A major part of the simulation work is to obtain a good mesh before the solution is achieved.
The thesis considers the total hull resistance of the vessel at different speeds due to pressure and shear forces. The results are compared with a still-water performance test for a scaled model, where wave-making resistance is also considered in the comparison. The resistance on the four holes in the hull, where the ballast tanks are placed, is compared with the resistance on the hull. Finally, the thesis examines how the water level inside the ballast tanks, which are open to the sea at the front and at the back, is affected at different speeds.
Tue, 20 Nov 2012 00:00:00 GMT
http://hdl.handle.net/1956/6546
2012-11-20T00:00:00Z

Modified action and differential operators on the 3-D sub-Riemannian sphere
http://hdl.handle.net/1956/6534
Modified action and differential operators on the 3-D sub-Riemannian sphere
Chang, Der-Chen; Markina, Irina; Vasiliev, Alexander
Peer reviewed; Journal article
Our main aim is to present a geometrically meaningful formula for the fundamental solutions to a second-order sub-elliptic differential equation and to the heat equation associated with a sub-elliptic operator in the sub-Riemannian geometry on the unit sphere S3. Our method is based on the Hamilton-Jacobi approach, where the corresponding Hamiltonian system is solved with mixed boundary conditions. A closed form of the modified action is given. It is a sub-Riemannian invariant and plays the role of a distance on S3.
Wed, 01 Dec 2010 00:00:00 GMT
http://hdl.handle.net/1956/6534
2010-12-01T00:00:00Z

Optimal dividend policies for a class of growth-restricted diffusion processes under transaction costs and solvency constraints
http://hdl.handle.net/1956/6226
Optimal dividend policies for a class of growth-restricted diffusion processes under transaction costs and solvency constraints
Bai, Lihua; Hunting, Martin; Paulsen, Jostein
Peer reviewed; Journal article
In this paper, we consider a company whose surplus follows a rather general diffusion process and whose objective is to maximize expected discounted dividend payments. With each dividend payment there are transaction costs and taxes, and it is shown in [7] that under some reasonable assumptions, optimality is achieved by using a lump sum dividend barrier strategy, i.e. there is an upper barrier ū and a lower barrier u so that whenever the surplus reaches ū, it is reduced to u through a dividend payment. However, these optimal barriers may be unacceptably low from a solvency point of view. It is argued that in that case one should still look for a barrier strategy, but with barriers that satisfy a given constraint. We propose a solvency constraint similar to that in [6]: whenever dividends are paid out, the probability of ruin within a fixed time T, with the same strategy in the future, should not exceed a predetermined level ε. It is shown how optimality can be achieved under this constraint, and numerical examples are given.
Sun, 01 Jan 2012 00:00:00 GMT
http://hdl.handle.net/1956/6226
2012-01-01T00:00:00Z

Existence of a classical solution of a parabolic PIDE associated with ruin probability
http://hdl.handle.net/1956/6225
Existence of a classical solution of a parabolic PIDE associated with ruin probability
Hunting, Martin
In this article we prove the existence of a classical solution of the integro-differential equation for the ruin probability in finite time stated in Paulsen (2008).
Mon, 18 Jun 2012 00:00:00 GMT
http://hdl.handle.net/1956/6225
2012-06-18T00:00:00Z

A numerical approach to ruin probability in finite time for fitted models with investment
http://hdl.handle.net/1956/6224
A numerical approach to ruin probability in finite time for fitted models with investment
Hunting, Martin
In this paper we present a numerical method for solving a partial
integro-differential equation (PIDE) associated with ruin probability, when
the surplus is continuously invested in stochastic assets. The method uses
precalculated Gaussian quadrature rules for the numerical integration.
Except for the numerical integration part, the method is based largely
on the finite differences method used in Halluin et al. (2005) for a PIDE
associated with a more general option pricing problem. In our numerical
examples we use historical data for inflation and returns on U.S. Treasury
bills, U.S. Treasury bonds and American stocks. The log-returns of the
investments are adjusted for an assumed constant force of inflation. We
consider four different strategies for continuous investment: (a) U.S. Treasury
bills with a constant maturity of 3 months, (b) U.S. Treasury bonds
with a constant maturity of 10 years, (c) the Standard and Poor's 500
index, and (d) another index of American stocks. For each of these strategies
a geometric Brownian motion process is fitted to the aforementioned
historical data. The results suggest that the ruin probabilities obtained
can vary substantially, depending on whether the models are fitted to data
for the last decade or for a longer time period. We also discuss numerical
solution of investment models with jumps.
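Fitting a geometric Brownian motion to log-returns, as is done for each investment strategy, can be sketched as follows. This is an illustrative sketch of mine, with simulated returns and invented parameters standing in for the historical data: under a GBM, log-returns over unit periods are i.i.d. N(mu - sigma^2/2, sigma^2), so moment estimates of the log-returns recover the drift mu and volatility sigma.

```python
import numpy as np

rng = np.random.default_rng(5)
mu_true, sigma_true = 0.06, 0.18   # assumed drift and volatility (invented)

# Simulated i.i.d. log-returns standing in for the historical series.
log_r = rng.normal(mu_true - 0.5 * sigma_true**2, sigma_true, size=10_000)

sigma_hat = log_r.std(ddof=1)                  # volatility estimate
mu_hat = log_r.mean() + 0.5 * sigma_hat**2     # drift estimate, undoing -sigma^2/2
print(f"mu_hat = {mu_hat:.4f}, sigma_hat = {sigma_hat:.4f}")
```

Fitting over a short recent window versus a long historical one changes these estimates, which is exactly the sensitivity of the resulting ruin probabilities discussed above.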
Sun, 01 Jan 2012 00:00:00 GMT
http://hdl.handle.net/1956/6224
2012-01-01T00:00:00Z

Optimal dividend policies with transaction costs for a class of jump-diffusion processes
http://hdl.handle.net/1956/6214
Optimal dividend policies with transaction costs for a class of jump-diffusion processes
Hunting, Martin; Paulsen, Jostein
Peer reviewed; Journal article
This paper addresses the problem of finding an optimal dividend policy for a class of jump-diffusion processes. The jump component is a compound Poisson process with negative jumps, and the drift and diffusion components are assumed to satisfy some regularity and growth restrictions. Each dividend payment carries a fixed and a proportional cost, meaning that if ξ is paid out by the company, the shareholders receive kξ−K, where k and K are positive. The aim is to maximize expected discounted dividends until ruin. It is proved that when the jumps belong to a certain class of light-tailed distributions, the optimal policy is a simple lump sum policy, that is, when assets are equal to or larger than an upper barrier ū∗, they are immediately reduced to a lower barrier u∗ through a dividend payment. The case K=0 is also investigated briefly, and the optimal policy is shown to be a reflecting barrier policy for the same light-tailed class. Methods to numerically verify whether a simple lump sum barrier strategy is optimal for any jump distribution are provided at the end of the paper, and some numerical examples are given.
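A hedged Monte Carlo sketch of the lump sum barrier policy described above; for illustration the jump-diffusion is replaced by a simple arithmetic Brownian surplus, and the barriers, costs and other parameters are invented numbers, not values from the paper. Whenever assets reach the upper barrier they are paid down to the lower barrier, and shareholders receive k*xi - K for a gross payment xi.

```python
import numpy as np

rng = np.random.default_rng(6)
u_low, u_high = 2.0, 5.0       # lower and upper barriers (illustrative)
k, K = 0.9, 0.1                # proportional factor and fixed transaction cost
r, dt, T = 0.04, 0.01, 20.0    # discount rate, time step, simulation horizon

def discounted_dividends(x0=3.0, drift=0.3, vol=0.6):
    """One path of the surplus under the lump sum barrier policy."""
    x, total, t = x0, 0.0, 0.0
    while t < T:
        x += drift * dt + vol * np.sqrt(dt) * rng.standard_normal()
        t += dt
        if x <= 0.0:                    # ruin: dividends stop
            break
        if x >= u_high:                 # pay the surplus down to the lower barrier
            xi = x - u_low              # gross dividend payment
            total += np.exp(-r * t) * (k * xi - K)  # shareholders get k*xi - K
            x = u_low
    return total

values = [discounted_dividends() for _ in range(500)]
print(f"estimated expected discounted dividends: {np.mean(values):.2f}")
```

Varying ū∗ and u∗ in such a simulation gives a crude picture of the trade-off that the paper resolves analytically; the fixed cost K is what makes a lump sum policy, rather than a reflecting barrier, optimal when K > 0.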
Sun, 01 Jan 2012 00:00:00 GMT
http://hdl.handle.net/1956/6214
2012-01-01T00:00:00Z

Ruin probability and optimal dividend policy for models with investment
http://hdl.handle.net/1956/6213
Ruin probability and optimal dividend policy for models with investment
Hunting, Martin
Doctoral thesis
<p>In most countries the authorities impose capital requirements on insurance companies in order to avoid the adverse consequences to society when insurance companies default on claims. Since holding capital is costly, this naturally leads to the problem of deciding how large the risk reserve needs to be, or what is a "safe" level of liquidity. A common answer is that the probability that the insurance company will default on policyholder claims should not be higher than a certain small level ε. An implementation of this policy requires reasonably accurate methods for determining this probability, known as the ruin probability.</p>
<p>Rigorous mathematical treatments of the ruin probability problem can be traced at least as far back as the acclaimed doctoral thesis of Filip Lundberg from 1903 with the title "Approximerad framställning af sannolikhetsfunktionen". Traditionally the focus has been on ruin probability on an infinite time horizon. In these models an insurance company can avoid ruin by allowing its risk reserve to grow toward infinity. At the 15th International Congress of Actuaries in 1957, Bruno de Finetti criticized this approach. In particular, he could not see why an older company should hold more capital than a younger one bearing similar risks, merely because it is older. As an alternative, de Finetti formulated what is now known as "de Finetti's dividend problem": maximize the expected sum of the discounted dividends paid out from time zero until ruin. Since then several papers have presented solutions to this problem for various risk processes. Two of the papers in this thesis, which we denote Paper A and Paper B, focus on de Finetti's dividend problem, with the risk process following a general diffusion and a jump-diffusion process, respectively. These models are particularly relevant for insurance companies where the premium income is invested in assets with stochastic returns. In keeping with de Finetti's original paper, where ruin probability played a central role, Paper A also discusses solutions of de Finetti's dividend problem under solvency constraints.</p>
<p>In the last few decades a growing number of papers have focused on ruin probability on a finite time horizon. For short time spans the assumption that the risk reserve is allowed to grow freely is less spurious. An important tool for calculating the ruin probability on a finite horizon is solving certain partial integro-differential equations (PIDEs). The third paper, denoted Paper C, discusses how these PIDEs can be solved numerically. The last paper, denoted Paper D, discusses regularity properties for some of these PIDEs.</p>
In the electronic version of the thesis the published version of paper I has been replaced with the accepted version.
Tue, 06 Nov 2012 00:00:00 GMT
http://hdl.handle.net/1956/6052
Gasslekkasjar i Marine miljø
Aase, Stig Andre
Master thesis
This master's thesis studies fluid flow through marine sediments, and the aim is to assess whether a simple mathematical model is able to describe and preserve the characteristic features of the process. The discussion is based on comparison with the few real data available to us. In this context, the Nyegga region in the Norwegian Sea has been chosen as the point of departure; there, the high density of craters on the seabed ("pockmarks") testifies to substantial vertical fluid movement. The thesis also gives an introduction to the genetic algorithm, Monte Carlo methods and parallelization of Matlab code, all of which are central tools used to address the problem at hand.
Tue, 19 Jun 2012 00:00:00 GMT
http://hdl.handle.net/1956/5784
Nonholonomic geometry on finite and infinite dimensional Lie groups and rolling manifolds
Grong, Erlend
Doctoral thesis
Part I: We start by giving some background on the topics discussed in this thesis.
The main topic of the thesis is nonholonomic geometry. In Chapter 1 we give an
introduction to nonholonomic geometry in the context of geometric control theory. In
a brief exposition, we try to give an overview of the areas of sub-Riemannian and
sub-Lorentzian geometry, stating several of the most important results in this area. A
historical account concludes this chapter.
Chapters 2 and 3 consist of mathematical prerequisites for the results presented later.
However, these chapters mainly focus on certain selected facts, rather than trying to give
an overview of a whole topic. Chapter 2 contains some results from differential geometry
related to submersions and geodesic curvatures. Chapter 3 gives introductory remarks
on the convenient calculus of infinite dimensional manifolds.
Chapter 4, the last chapter in part I, gives a short presentation and summary of the
main results of the papers included in Part II. We first present the results of Paper
B, regarding sub-Riemannian and sub-Lorentzian geometry on the universal cover of
SU(1, 1). The results in Papers C, D and F are then considered, which concern the
nonholonomic dynamical system of two manifolds rolling on each other without twisting
or slipping. Finally, we present some results on infinite dimensional manifolds from Paper
A and Paper F. In particular, Paper F contains a generalization of sub-Riemannian
geometry to the infinite dimensional setting.
Part I ends with the bibliography for the first four chapters.
Part II: Six papers are included, Papers A to F, listed in chronological order of
completion. Two of them are published, one is accepted for publication and three are
submitted.
Fri, 30 Mar 2012 00:00:00 GMT
http://hdl.handle.net/1956/5772
Boundary-value problems and shoaling analysis for the BBM equation
Senthilkumar, Amutha
Master thesis
In this thesis we study the BBM equation
$$u_t + u_x + \frac{3}{2}uu_x - \frac{1}{6}u_{xxt} = 0,$$
which describes approximately the two-dimensional propagation of surface waves in a uniform horizontal channel containing an incompressible and inviscid fluid which in its undisturbed state has depth $h$. Here $u(x,t)$ represents the deviation of the water surface from its undisturbed position, and the flow is assumed to be irrotational.
The BBM equation features a bounded dispersion relation (Benjamin, Bona and Mahony). We utilize this boundedness to prove existence, uniqueness and regularity results for solutions of the BBM equation supplemented with an initial condition and various types of boundary conditions. We also treat the water-wave problem over an uneven bottom. In particular, we consider two
different models for the propagation of long waves in channels of decreasing depth and we provide both analytical and numerical results for these models. For the numerical simulation we use a spectral discretization coupled with a four-stage Runge-Kutta time integration scheme.
After verifying numerically that the algorithm is fourth-order accurate in time,
we propagate a solitary wave over the uneven bottom and examine how solitary waves respond to the non-uniform depth. Our numerical simulations are compared with previous numerical and experimental results of Madsen and Mei, and of Peregrine.
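The time-stepping scheme described above can be sketched in a few lines. The following is a minimal pseudospectral illustration of ours, not the thesis' code: the BBM equation is advanced on a periodic domain with the classical four-stage Runge-Kutta method, exploiting that the operator (1 − ∂ₓ²/6) multiplying u_t is diagonal, and so cheaply invertible, in Fourier space. Grid size, domain length, step size and the sech² initial hump are our own choices.

```python
import numpy as np

def bbm_rhs(u_hat, k):
    """Fourier-space right-hand side of the BBM equation
    u_t + u_x + (3/2) u u_x - (1/6) u_xxt = 0,
    using (3/2) u u_x = (3/4) (u^2)_x and dividing by (1 + k^2/6)."""
    u = np.fft.ifft(u_hat).real
    nonlin = 0.75 * np.fft.fft(u * u)
    return -1j * k * (u_hat + nonlin) / (1.0 + k**2 / 6.0)

def step_rk4(u_hat, k, dt):
    """Classical four-stage Runge-Kutta time step."""
    k1 = bbm_rhs(u_hat, k)
    k2 = bbm_rhs(u_hat + 0.5 * dt * k1, k)
    k3 = bbm_rhs(u_hat + 0.5 * dt * k2, k)
    k4 = bbm_rhs(u_hat + dt * k3, k)
    return u_hat + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# Illustrative run on a periodic domain with a flat bottom.
N, Ldom = 256, 50.0
x = np.linspace(0, Ldom, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=Ldom / N)   # angular wavenumbers
u = 0.3 / np.cosh(np.sqrt(0.3) * (x - Ldom / 2))**2   # sech^2 initial hump
u_hat = np.fft.fft(u)
for _ in range(400):
    u_hat = step_rk4(u_hat, k, 0.01)
u_final = np.fft.ifft(u_hat).real
print(u_final.max())
```

Because the BBM dispersion relation is bounded, the stiffness usually associated with spectral spatial derivatives is absent here, which is why an explicit Runge-Kutta method with a moderate time step is stable; note also that the k = 0 mode is exactly conserved, so the scheme preserves the mean of u.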
Mon, 20 Feb 2012 00:00:00 GMT
http://hdl.handle.net/1956/5771
A study in the effects of different treatments on the growth of high fluorescence virus.
Maharjan, Bikram
Master thesis
Thu, 01 Sep 2011 00:00:00 GMT
http://hdl.handle.net/1956/5770
The Flow of Quasiconformal Mappings on S³ with Contact structure and a Family of Surfaces on the Heisenberg Group
Lavrichenko, Ksenia
Master thesis
Tue, 31 May 2011 00:00:00 GMT
http://hdl.handle.net/1956/5769
Gode rank 1 latticereglar av høg dimensjon
Topphol, Vegard
Master thesis
This thesis treats numerical integration rules of trigonometric degree known as lattice rules, with particular focus on developing and testing algorithms for finding good lattice rules of rank 1. The necessary theory is also reviewed so that the text can stand on its own.
Wed, 01 Jun 2011 00:00:00 GMT
http://hdl.handle.net/1956/5768
Reactive Transport in Porous Media
Sævik, Pål Næverlid
Master thesis
In this thesis, we show how the common equations for flow in porous media can be expanded to account for geochemical reactions. Furthermore, the complications arising when solving the new equations numerically are described and explained. Specialised methods that alleviate the difficulties are then introduced and discussed with respect to robustness and convergence properties. Much attention is directed to ways of reformulating the equations in order to make them more amenable to numerical treatment. Also, the fact that chemical reactions introduce stiffness to the system is addressed. Diagonally implicit Runge-Kutta methods, which are commonly used to combat stiffness, are evaluated with respect to their usefulness in CO2 sequestration simulations. Finally, we have applied the methods to several test cases, some including complex mineralogies, to illustrate the strengths and weaknesses of the different numerical approaches.
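Diagonally implicit Runge-Kutta (DIRK) methods of the kind evaluated in the thesis can be illustrated on a small stiff model problem. The sketch below implements the standard two-stage, second-order, L-stable SDIRK method of Alexander with γ = 1 − 1/√2; the fast-relaxation ODE and all parameter values are illustrative choices of ours, not the thesis' test cases.

```python
import math

GAMMA = 1.0 - 1.0 / math.sqrt(2.0)  # two-stage, second-order, L-stable SDIRK

def sdirk2_step(f, dfdy, t, y, h, newton_iters=8):
    """One step of the classical two-stage SDIRK method with
    Butcher tableau c = [g, 1], A = [[g, 0], [1-g, g]], b = [1-g, g].
    Each implicit stage is solved with a scalar Newton iteration;
    because the diagonal coefficient is the same in both stages, the
    Newton systems share the same structure."""
    def solve_stage(t_s, rhs):
        Y = y                            # solve Y = rhs + h*GAMMA*f(t_s, Y)
        for _ in range(newton_iters):
            F = Y - rhs - h * GAMMA * f(t_s, Y)
            Y -= F / (1.0 - h * GAMMA * dfdy(t_s, Y))
        return Y
    Y1 = solve_stage(t + GAMMA * h, y)
    k1 = f(t + GAMMA * h, Y1)
    Y2 = solve_stage(t + h, y + h * (1.0 - GAMMA) * k1)
    k2 = f(t + h, Y2)
    return y + h * ((1.0 - GAMMA) * k1 + GAMMA * k2)

# Stiff illustrative problem (ours): fast relaxation toward a slowly
# varying equilibrium, mimicking a fast chemical reaction.
lam = 1.0e4
f = lambda t, y: -lam * (y - math.cos(t))
dfdy = lambda t, y: -lam
t, y, h = 0.0, 0.0, 0.05             # step size far larger than 1/lam
for _ in range(40):
    y = sdirk2_step(f, dfdy, t, y, h)
    t += h
print(y, math.cos(t))                # y tracks the slow equilibrium
```

An explicit method would need h of order 1/λ = 10⁻⁴ here; the L-stable DIRK method takes steps 500 times larger while damping the fast transient, which is the property that makes this method family attractive for stiff reactive-transport systems.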
Mon, 01 Aug 2011 00:00:00 GMT
http://hdl.handle.net/1956/5767
Modulus method and its application to the theory of univalent functions
Belyaeva, Elena Vasilievna
Master thesis
This work is about the modulus method in univalent function theory. It is based on the notion of the modulus of families of curves on a Riemann surface, which has turned out to be very useful in solving various extremal problems for conformal and quasiconformal mappings.
Mon, 23 May 2011 00:00:00 GMT
http://hdl.handle.net/1956/5766
Injective braids, braided operads and double loop spaces
Solberg, Mirjam
Master thesis
We construct the category of B-spaces, which is a braided monoidal diagram category. This category is Quillen equivalent to the category of simplicial sets. The induced equivalence of homotopy categories maps a commutative B-space monoid to a space that, if connected, is weakly equivalent to a double loop space. If X is a connected space, we find a commutative B-space monoid whose homotopy colimit is weakly equivalent to Ω²Σ²X. Similarly, we find a commutative B-space monoid that represents the nerve of a braided strict monoidal category.
Wed, 01 Jun 2011 00:00:00 GMT
http://hdl.handle.net/1956/5765
Comparison of distance measures in Multimodal registration of medical images
Nyfløtt, Jon Kirkebø
Master thesis
In this thesis I have worked with medical image registration, in particular registration of multimodal images. I have tested the image registration software package FAIR developed by Jan Modersitzki, with particular focus on the choice of distance measure in image registration. I have tested the distance measures included in FAIR, in particular the Normalized Gradient Fields measure developed by Modersitzki, and compared them.
In addition, I have developed a new distance measure called Normalized Hessian Fields and implemented it within the FAIR software. It has been compared to the other distance measures. I have also tested a combination of Normalized Gradient Fields and Normalized Hessian Fields.
The registration has been performed on multimodal brain data and kidney data provided by Haukeland University Hospital, Bergen.
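For context, the Normalized Gradient Fields idea of Haber and Modersitzki can be written down in a few lines: the measure compares gradient directions rather than intensities, which is what makes it usable across modalities. The sketch below is our own simplified illustration, not the FAIR implementation; the edge parameter eps and the toy images are arbitrary choices.

```python
import numpy as np

def ngf_distance(R, T, eps=1e-2, h=1.0):
    """Normalized Gradient Fields distance between images R and T
    (after Haber & Modersitzki): at each pixel, one minus the squared
    cosine of the angle between the image gradients is summed, so
    aligned edges cost nothing regardless of intensity scale or sign.
    eps damps the influence of noise-level gradients (a tuning choice)."""
    gRy, gRx = np.gradient(R, h)
    gTy, gTx = np.gradient(T, h)
    dot = gRx * gTx + gRy * gTy
    nR = np.sqrt(gRx**2 + gRy**2 + eps**2)
    nT = np.sqrt(gTx**2 + gTy**2 + eps**2)
    return np.sum(1.0 - (dot / (nR * nT))**2)

# Multimodal toy example: same geometry, unrelated intensity maps.
x, y = np.meshgrid(np.linspace(-1, 1, 64), np.linspace(-1, 1, 64))
edge = np.tanh(10 * x)            # a vertical edge
R = edge
T_aligned = -3.0 * edge + 5.0     # reversed contrast, same structure
T_rotated = np.tanh(10 * y)       # same edge, rotated 90 degrees
print(ngf_distance(R, T_aligned), ngf_distance(R, T_rotated))
```

The aligned pair scores lower than the rotated pair even though their intensities are completely different, which is exactly the behaviour a multimodal distance measure needs; a Hessian-based variant replaces first derivatives with second derivatives in the same scheme.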
Tue, 14 Jun 2011 00:00:00 GMT
http://hdl.handle.net/1956/5761
Robusthet ved skadereservering - Chain-Ladder og influens funksjoner
Ones, Sindre
Master thesis
In this thesis I have examined how outliers can affect the IBNR reserve in claims reserving. I have studied influence functions for the classical chain-ladder method and seen that outliers can affect the chain-ladder estimate of the total reserve considerably. I have then considered the classical chain-ladder method, which is widely used in IBNR reserving, together with a modification of it designed to capture outliers and give an indication of what the IBNR reserve would be without these outliers in the data set. This has been done for three different data sets, with the results given in Chapter 7. Using the robust chain-ladder method as a diagnostic tool, it is natural to compare its results with those of the classical chain-ladder method. For results where the deviation is large, perhaps only a few per cent, it is necessary to examine the data set more closely, investigate what actually causes the outliers, and assess how likely it is that such extreme observations will occur again.
Wed, 01 Jun 2011 00:00:00 GMT
http://hdl.handle.net/1956/5759
Intervall mellom fødslar studert ved "Phase Type Distributions"
Asperheim, Leiv Magne
Master thesis
Treats the time between the first and second child, studied by means of phase-type distributions. These times are viewed as survival times, and we attempt to fit a phase-type distribution to the data set under consideration. Smoothing of the Nelson-Aalen estimator is also used to find a distribution for the survival times. Based on the distribution we find, we study two different data sets collected at different points in time. This tells us whether the birth pattern has changed during the time elapsed between the data sets.
Tue, 31 May 2011 00:00:00 GMT
http://hdl.handle.net/1956/5752
Statistiske modellar for alder-periode-kohort-effektar i epidemiologi
Forland, Gunhild
Master thesis
This thesis examines the theory of age-period-cohort models and how they can be implemented in the statistical software R. It also looks at how spline functions can be used in age-period-cohort models and how these models are applied in practice in cancer epidemiology.
Fri, 20 May 2011 00:00:00 GMT
http://hdl.handle.net/1956/5745
Open-source MATLAB implementation of consistent discretisations on complex grids
Lie, Knut-Andreas; Krogstad, Stein; Ligaarden, Ingeborg Skjelkvåle; Natvig, Jostein Roald; Nilsen, Halvor Møll; Skaflestad, Bård
Peer reviewed; Journal article
Accurate geological modelling of features such as faults, fractures or erosion requires grids that are flexible with respect to geometry. Such grids generally contain polyhedral cells and complex grid-cell connectivities. The grid representation for polyhedral grids in turn affects the efficient implementation of numerical methods for subsurface flow simulations. It is well known that conventional two-point flux-approximation methods are only consistent for K-orthogonal grids and will, therefore, not converge in the general case. In recent years, there has been significant research into consistent and convergent methods, including mixed, multipoint and mimetic discretisation methods. Likewise, the so-called multiscale methods based upon hierarchically coarsened grids have received a lot of attention. The paper does not propose novel mathematical methods but instead presents an open-source Matlab® toolkit that can be used as an efficient test platform for (new) discretisation and solution methods in reservoir simulation. The aim of the toolkit is to support reproducible research and simplify the development, verification, validation, testing and comparison of new discretisation and solution methods on general unstructured grids, including in particular corner-point and 2.5D PEBI grids. The toolkit consists of a set of data structures and routines for creating, manipulating and visualising petrophysical data, fluid models and (unstructured) grids, including support for industry-standard input formats, as well as routines for computing single-phase and multiphase (incompressible) flow. We review key features of the toolkit and discuss a generic mimetic formulation that includes many known discretisation methods, including both the standard two-point method as well as consistent and convergent multipoint and mimetic methods. Apart from the core routines and data structures, the toolkit contains add-on modules that implement more advanced solvers and functionality. Herein, we show examples of multiscale methods and adjoint methods for use in optimisation of rates and placement of wells.
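The two-point flux approximation (TPFA) mentioned above is the simplest scheme in this family, and its structure is easy to show. The sketch below is a one-dimensional toy assembly of our own (unit viscosity, Dirichlet pressures at both ends), not MRST code; it illustrates the harmonic averaging of transmissibilities that TPFA uses, while the consistency problems on non-K-orthogonal grids only appear in higher dimensions.

```python
import numpy as np

def tpfa_1d(perm, dx, p_left, p_right):
    """Two-point flux approximation on a uniform 1D grid with cell
    permeabilities `perm` (unit viscosity, unit cross-section).
    Face transmissibilities are harmonic averages of the two adjacent
    half-cell transmissibilities; Dirichlet pressures are imposed at
    both ends through the boundary half-cell transmissibilities."""
    n = len(perm)
    half = 2.0 * perm / dx                       # half-cell transmissibilities
    T_face = np.zeros(n + 1)
    T_face[1:-1] = 1.0 / (1.0 / half[:-1] + 1.0 / half[1:])
    T_face[0], T_face[-1] = half[0], half[-1]    # boundary faces
    A = np.zeros((n, n))
    b = np.zeros(n)
    for i in range(n):
        A[i, i] = T_face[i] + T_face[i + 1]
        if i > 0:
            A[i, i - 1] = -T_face[i]
        if i < n - 1:
            A[i, i + 1] = -T_face[i + 1]
    b[0] += T_face[0] * p_left
    b[-1] += T_face[-1] * p_right
    return np.linalg.solve(A, b)

# Homogeneous medium: TPFA reproduces the exact linear pressure profile.
p = tpfa_1d(np.ones(5), dx=1.0, p_left=1.0, p_right=0.0)
print(p)
```

In one dimension every grid is K-orthogonal, so the method is exact for linear pressure; the mimetic and multipoint methods the toolkit provides generalise precisely this flux construction to grids where the two-point stencil is no longer consistent.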
Tue, 02 Aug 2011 00:00:00 GMT
http://hdl.handle.net/1956/5744
Sampling error distribution for the ensemble Kalman filter update step
Kovalenko, Andrey; Mannseth, Trond; Nævdal, Geir
Peer reviewed; Journal article
In recent years, data assimilation techniques have been applied to an increasingly wide spectrum of problems. Monte Carlo variants of the Kalman filter, in particular the ensemble Kalman filter (EnKF), have gained significant popularity. EnKF is used for a wide variety of applications, among them updating reservoir simulation models. EnKF is a Monte Carlo method, and its reliability depends on the actual size of the sample. In applications, a moderately sized sample (40–100 members) is used for computational convenience. Problems due to the resulting Monte Carlo effects require a more thorough analysis of the EnKF. Earlier we presented a method for the assessment of the error emerging at the EnKF update step (Kovalenko et al., SIAM J Matrix Anal Appl, in press). A particular energy norm of the EnKF error after a single update step was studied. The energy norm used to assess the error is hard to interpret. In this paper, we derive the distribution of the Euclidean norm of the sampling error under the same assumptions as before, namely normality of the forecast distribution and negligibility of the observation error. The distribution depends on the ensemble size, the number and spatial arrangement of the observations, and the prior covariance. The distribution is used to study the error propagation in a single update step on several synthetic examples. The examples illustrate the changes in reliability of the EnKF when the parameters governing the error distribution vary.
Wed, 06 Jul 2011 00:00:00 GMT
http://hdl.handle.net/1956/5729
On flow properties of surface waves described by model equations of Boussinesq type
Ali, Alfatih Mohammed A.
Doctoral thesis
Fri, 23 Mar 2012 00:00:00 GMT
http://hdl.handle.net/1956/5611
Quality of the Analysis Step in EnKF
Bergo, Brede Rem
Master thesis
In many physical applications we want to characterize the parameters of a system based on indirect observations or measurements.
In a reservoir simulator setting, the goal is to simulate the production of hydrocarbons from the reservoir. This way we can try out different production strategies and optimize the production plan before the reservoir is put on production. These decisions depend on good simulations of the flow of oil, gas and water in the porous rocks.
To achieve appropriate flow calculations, a good estimate of the flow properties of the rock is needed. The process of building an approximation to the reservoir itself and its properties is called reservoir modeling or reservoir characterization. For this, prior information is used, like well logs, analyzed core plugs from the appraisal wells and seismic data. This information gives us some estimate of our poorly known reservoir parameters, like the porosity and permeability fields. The performance of the reservoir, given a recovery strategy, can be predicted by a reservoir simulator. After the field is put on production one may use the production data to improve the reservoir model. The basic idea is that predicted performance should match the observed performance. By tuning the parameters in the model, one tries to fit the output of the simulator to the production history. This is referred to as history matching, which is a nonlinear inverse problem.
A promising method to automatically perform the history matching is the Ensemble Kalman Filter. EnKF is a sequential data assimilation algorithm using Monte Carlo techniques where measurements and prior information about the system is combined to make the best weighted estimate based on their uncertainties. After the assimilation, the model is run forward in time using the reservoir simulator. When new observations or data are available, the next analysis step will incorporate the new observations to produce a new analyzed estimate.
Assimilating a large number of data at the same time has proved to be a difficult challenge for EnKF. This could correspond to the use of e.g. 4D seismic data. One computational advantage is that the covariance matrix of the system is never explicitly calculated, but rather approximated from the ensemble itself. However, spurious correlations in the ensemble sample covariance matrix are one problem to be addressed. In particular, properties in cells far away from the location of measurements are affected to too great an extent. EnKF is based on the Kalman Filter, which is a recursive filter for linear problems.
In this master thesis we consider the quality of the analysis step of the EnKF. Our main focus is the sampling errors caused by the approximated sample covariance matrix when an increasing number of measurements are assimilated. The work here is inspired by [KovalenkoSamplingError2011, KovalenkoECMOR], where a probabilistic measure for the sampling error is derived under the assumptions of a normally distributed prior and negligible measurement errors. Here we try a somewhat different approach using approximate calculations and Neumann series to assess the sampling error. We consider measurement errors of varying size.
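The analysis step studied above can be written compactly. The following is a textbook stochastic-EnKF update of our own construction (state dimension, ensemble size, observation operator and the biased Gaussian prior are arbitrary illustration choices): the forecast covariance is replaced by the ensemble sample covariance, and each member assimilates a perturbed copy of the observations.

```python
import numpy as np

def enkf_analysis(X_f, H, d, R, rng):
    """Stochastic-EnKF analysis step.  Columns of X_f are forecast
    ensemble members; H is the observation operator, d the observations,
    R the observation-error covariance.  The Kalman gain is built from
    the ensemble sample covariance (formed explicitly here for clarity;
    large applications avoid this and work with the anomalies directly)."""
    n, N = X_f.shape
    A = X_f - X_f.mean(axis=1, keepdims=True)   # ensemble anomalies
    C = A @ A.T / (N - 1)                       # sample covariance
    Kgain = C @ H.T @ np.linalg.inv(H @ C @ H.T + R)
    # Perturbed observations: one noisy copy of d per member.
    D = d[:, None] + rng.multivariate_normal(np.zeros(len(d)), R, size=N).T
    return X_f + Kgain @ (D - H @ X_f)

rng = np.random.default_rng(0)
n, N = 10, 500                                  # state dim, ensemble size
truth = np.zeros(n)
X_f = rng.normal(0.0, 1.0, size=(n, N)) + 2.0   # biased prior ensemble
H = np.eye(2, n)                                # observe first two components
d = H @ truth
R = 0.01 * np.eye(2)
X_a = enkf_analysis(X_f, H, d, R, rng)
print(X_a[:2].mean(axis=1))                     # pulled toward the truth
```

With accurate observations the observed components are pulled close to the truth while the unobserved ones move only through sample correlations; since the true cross-correlations are zero here, any movement of the unobserved components is exactly the spurious-correlation sampling error the thesis analyses, and it grows as the ensemble size N shrinks.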
Wed, 15 Jun 2011 00:00:00 GMT
http://hdl.handle.net/1956/5597
Karakterisering av strømning gjennom en forkastning med fault facies- og standardmodell ved bruk av reservoarsimulator
Carstensen, Carl-Martin
Master thesis
Faults have a considerable impact on flow in a reservoir. Until now, faults have been modelled as a surface or a plane in standard reservoir simulators, but studies have shown that faults must often be regarded as volumes rather than surfaces. A fault facies model accounts for this by modelling the faults as volumes with facies that carry the fault properties, but solving this model numerically is very time-consuming and in some cases impossible.
In this study, flow across faults has been examined in the reservoir simulator ECLIPSE. Two models were built: a standard model in which the fault is modelled as a surface, and a fault facies model in which the fault is modelled as a volume.
The aim of the study was to investigate the nature of the fault flows, whether the standard model simulated them with sufficient accuracy, and whether the results from the fault facies model could be reproduced in an equivalent standard model (an extended standard model).
Many runs were carried out with different fault properties, and barriers were implemented in both the reservoir and the fault. The results indicated that the standard model was not sufficiently accurate. The saturation picture from the fault facies model was successfully recreated with an extended standard model, but the pressure picture differed between the models. It was therefore concluded that the fault facies model should be used when fault models are to be simulated.
Mon, 21 Nov 2011 00:00:00 GMT
http://hdl.handle.net/1956/5561
Oribatid mites in a changing world
De la Riva Caballero, Arguitxu
Doctoral thesis
The main scope of this thesis is to illustrate the validity of oribatid mites as tools for palaeoecological reconstructions. Palaeoecology studies the responses of past organisms to past environmental changes. This can be accomplished through the use of biological proxies, which are indicators of past conditions. The search for additional means of distinguishing climate change has only recently led to the use of other commonly found biological proxies such as tiny oribatid mites known as moss-mites. Oribatid mites are among the most numerous biological remains in anoxic sediments, yet until now oribatids have not been widely used due to the uncertainties about their present distribution and the lack of expertise to identify them to species level. This thesis contains four papers which provide evidence about how oribatid mites, when they are properly identified to species level and their background distribution is adequately known, can give useful additional and supporting information for reconstructing past habitat and environmental conditions.

Paper I studied oribatid preferences and ecology in different habitats, mainly forested, in western Norway. One hundred and ninety-two species were found, of which 64 were new records for Norway. The species Chamobates borealis, Oppiella nova, Moritzoppia neerlandica, and Rhinoppia subpectinata characterised the oribatid communities of Betula, mixed, and Picea forest subsets. Deciduous forest oribatid communities were characterised by Achipteria coleoptrata, Acrotritria ardua, Ceratozetes gracilis, and Oribatella calcarata. Hemileius initialis, Nanhermannia dorsalis, C. borealis, Tectocepheus velatus, and Atropacarus striculus characterised wet habitats. In water-logged habitats, Limnozetes ciliatus, Mucronothrus nasalis, and Trimalaconothrus glaber dominated. Carabodes labyrinthicus, C. marginatus, Melanozetes mollicomus, and T. velatus characterised the oribatid community of the lichen and moss subset. The tree-line ecotone was dominated by the euryoecious species H. initialis, T. velatus, and Oribatula tibialis. This study represents a thorough survey of oribatid communities in western Norway, and the insights it gives are an important tool for habitat reconstructions, as they provide the background knowledge about modern oribatid fauna needed to identify the type of past plant community and past environments represented in Quaternary sediments.

Paper II studied the oribatid communities at the tree-line in western Norway and compared them with the oribatid fossil assemblages found in Lake Trettetjørn. The modern oribatid assemblage provided a guide to the reliability of the fossil assemblages for reconstructing ecological and environmental changes and, in addition, helped to find the most favourable coring point within the small lake. Results showed that the core retrieved from the middle of the Lake Trettetjørn basin represented the oribatid fauna from the catchment area. Aquatic oribatids were the best-represented group in the lake sediments, followed by oribatids from the habitats adjacent to the lake. This constitutes good evidence that oribatids are excellent indicators of local habitats. Comparison of the oribatid fauna found in the lake traps with the oribatid assemblages from Paper III illustrated the importance of identifying the mites to species level, as this increased the ecological indicator value and, therefore, the reliability of the palaeoreconstructions.

In Paper III, sub-fossil oribatid mites, pollen, plant macrofossils, and diatoms from a lake sediment core from western Norway were studied. This multi-proxy study attempted to reconstruct tree-line fluctuations and their impact on Lake Trettetjørn's environment. Evidence from pollen, plant macrofossils, and oribatids complemented and corroborated each other in the reconstruction of the vegetational development. A semi-open grassland developed into forest. Mires began to replace forested areas on the landscape as a more oceanic climate began to prevail. All proxies indicated increasingly intensive human land-use as the Upsete settlement grew to accommodate the construction of the Bergen-Oslo railway.

In Paper IV, oribatid mites and pollen were used to reconstruct the local habitat at an archaeological excavation. The study aimed to identify the start of cereal cultivation at Kvitevoll farm, on Halsnøy island, western Norway. The high number of oribatid remains identified to species level and the close match to the pollen stratigraphy led to a detailed palaeoenvironmental reconstruction. Oribatids and pollen indicated the development of a moist forest followed by vegetation openings and mire expansion over the site. At the top of the sequence, the presence of oribatids such as Tectocepheus velatus and the increase in members of the family Oppiidae indicated a higher degree of disturbance, probably from grazing. Pollen of Cerealia indicated the start of cultivation around the same time.
Fri, 09 Dec 2011 00:00:00 GMT
http://hdl.handle.net/1956/5549
Early prediction of response to radiotherapy and androgen-deprivation therapy in prostate cancer by repeated functional MRI: a preclinical study
Røe, Kathrine; Kakar, Manish; Seierstad, Therese; Ree, Anne H.; Olsen, Dag R.
Peer reviewed; Journal article
Background: In modern cancer medicine, morphological magnetic resonance imaging (MRI) is routinely used in
diagnostics, treatment planning and assessment of therapeutic efficacy. During the past decade, functional imaging
techniques like diffusion-weighted (DW) MRI and dynamic contrast-enhanced (DCE) MRI have increasingly been
included into imaging protocols, allowing extraction of intratumoral information of underlying vascular, molecular
and physiological mechanisms, not available in morphological images. Separately, pre-treatment and early changes
in functional parameters obtained from DWMRI and DCEMRI have shown potential in predicting therapy response.
We hypothesized that the combination of several functional parameters increased the predictive power.
Methods: We challenged this hypothesis by using an artificial neural network (ANN) approach, exploiting nonlinear
relationships between individual variables, which is particularly suitable in treatment response prediction involving
complex cancer data. A clinical scenario was elicited by using 32 mice with human prostate carcinoma xenografts
receiving combinations of androgen-deprivation therapy and/or radiotherapy. Pre-radiation and on days 1 and 9
following radiation three repeated DWMRI and DCEMRI acquisitions enabled derivation of the apparent diffusion
coefficient (ADC) and the vascular biomarker Ktrans, which together with tumor volumes and the established
biomarker prostate-specific antigen (PSA), were used as inputs to a back propagation neural network,
independently and combined, in order to explore their feasibility of predicting individual treatment response
measured as 30 days post-RT tumor volumes.
Results: ADC, volumes and PSA as inputs to the model revealed a correlation coefficient of 0.54 (p < 0.001)
between predicted and measured treatment response, while Ktrans, volumes and PSA gave a correlation coefficient
of 0.66 (p < 0.001). The combination of all parameters (ADC, Ktrans, volumes, PSA) successfully predicted treatment
response with a correlation coefficient of 0.85 (p < 0.001).
Conclusions: In a preclinical investigation, we have shown that the combination of early changes in several
functional MRI parameters provides additional information about therapy response. If such an approach can be
clinically validated, it may become a tool to help identify non-responding patients early in treatment, allowing
these patients to be considered for alternative treatment strategies and thus contributing to the
development of individualized cancer therapy.
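The back-propagation approach described above can be sketched as follows. This is a minimal illustration only: the data are synthetic stand-ins (the ADC, Ktrans, volume and PSA inputs are merely named for orientation), not the reported measurements, and the network size and training settings are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in data: one feature vector per mouse (the study used
# 32 mice), loosely labeled ADC, Ktrans, volume, PSA. Purely synthetic.
n, d = 32, 4
X = rng.normal(size=(n, d))
w_true = np.array([0.5, -1.0, 0.8, 0.3])
y = X @ w_true + 0.1 * rng.normal(size=n)   # synthetic "treatment response"

# Minimal one-hidden-layer network trained by back-propagation (MSE loss)
h = 8
W1 = rng.normal(scale=0.5, size=(d, h)); b1 = np.zeros(h)
W2 = rng.normal(scale=0.5, size=h);      b2 = 0.0

lr = 0.05
for _ in range(3000):
    z = np.tanh(X @ W1 + b1)              # hidden activations
    pred = z @ W2 + b2                    # network output
    err = pred - y                        # residuals
    gW2 = z.T @ err / n; gb2 = err.mean()
    dz = np.outer(err, W2) * (1 - z**2)   # back-propagated error
    gW1 = X.T @ dz / n; gb1 = dz.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

pred = np.tanh(X @ W1 + b1) @ W2 + b2
r = np.corrcoef(pred, y)[0, 1]            # predicted vs. "measured" correlation
print(r)
```

As in the abstract, model quality is summarized by the correlation coefficient between predicted and measured response; the values reported there (0.54, 0.66, 0.85) come from the actual study, not from this sketch.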
Wed, 08 Jun 2011 00:00:00 GMThttp://hdl.handle.net/1956/55492011-06-08T00:00:00ZConvective mixing in geological carbon storage
http://hdl.handle.net/1956/5540
Convective mixing in geological carbon storage
Elenius, Maria
Doctoral thesis
The industrial era has seen an exponential growth in the atmospheric concentration
of carbon dioxide (CO2), resulting mainly from the burning of fossil fuels. This
can cause changes in the climate that have severe impacts on freshwater and food
supply, ecosystems and society. One of the most viable options to reduce CO2
emissions is to store it in geological formations, in particular in saline aquifers.
In this option, the carbon is again stored in the subsurface, from which it was
extracted. The first geological storage project was initiated in Norway in 1996,
and CO2 had long before been injected into geological formations to enhance oil
recovery. Storage occurs with CO2 in a so-called supercritical state, and this fluid
is buoyant in the formation. Four physical mechanisms help trap the carbon
in the formation: the CO2 plume accumulates under a low-permeability caprock;
CO2 is trapped as disconnected drops in small pores; buoyancy is lost when CO2
dissolves into the water; and on longer time-scales chemical reactions incorporate
the carbon in minerals. Dissolution trapping is largely determined by convective
mixing, which is a rich problem that was first investigated almost 100 years ago.
We investigate the influence of convective mixing on dissolution trapping in geological
storage of CO2. Most formations that can be used for CO2 storage are slightly tilted. We show
with numerical simulations that dissolution trapping must in general be accounted for
when questions about the final migration distance and time of the CO2
plume are to be answered. The saturations in the plume correspond well to transition
zones consistent with capillary equilibrium. The results also show that the
capillary transition zone, in which both the supercritical CO2 plume and the water
phase exist and are mobile, participates in the convective mixing. Using linear
stability analysis complemented with numerical simulations, we show that the
interaction between convective mixing and the capillary transition zone leads to
considerably larger dissolution rates and a reduced onset time for enhanced convective
mixing compared to when this interaction is neglected. The selection of
the wavelength that first becomes unstable remains almost unchanged by the interaction.
A statistical investigation of the onset time of enhanced convective mixing
when the capillary transition zone is neglected reveals that it is notably
larger than the onset time of instability for three example formations. However,
comparison of these simulation results with the investigations in a sloping aquifer
preliminarily suggests that the distance the plume propagates during the onset
time of enhanced convective mixing is negligible and that therefore this time
can be assumed to be zero, with the possible exception of aquifers that have steep
slopes.
Fri, 25 Nov 2011 00:00:00 GMThttp://hdl.handle.net/1956/55402011-11-25T00:00:00ZCO2 trapping in sloping aquifers: High resolution numerical simulations
http://hdl.handle.net/1956/5539
CO2 trapping in sloping aquifers: High resolution numerical simulations
Elenius, Maria; Tchelepi, Hamdi A.; Johannsen, Klaus
Conference object
Fri, 01 Jan 2010 00:00:00 GMThttp://hdl.handle.net/1956/55392010-01-01T00:00:00ZLiposomal doxorubicin improves radiotherapy response in hypoxic prostate cancer xenografts
http://hdl.handle.net/1956/5481
Liposomal doxorubicin improves radiotherapy response in hypoxic prostate cancer xenografts
Hagtvet, Eirik; Røe, Kathrine; Olsen, Dag R.
Peer reviewed; Journal article
Background: Tumor vasculature frequently fails to supply sufficient levels of oxygen to tumor tissue resulting in
radioresistant hypoxic tumors. To improve therapeutic outcome radiotherapy (RT) may be combined with cytotoxic
agents.
Methods: In this study we have investigated the combination of RT with the cytotoxic agent doxorubicin (DXR)
encapsulated in pegylated liposomes (PL-DXR). The PL-DXR formulation Caelyx® was administered to male mice
bearing human, androgen-sensitive CWR22 prostate carcinoma xenografts in a dose of 3.5 mg DXR/kg, in
combination with RT (2 Gy/day × 5 days) performed under normoxic and hypoxic conditions. Hypoxic RT was
achieved by experimentally inducing tumor hypoxia by clamping the tumor-bearing leg five minutes prior to and
during RT. Treatment response evaluation consisted of tumor volume measurements and dynamic contrast-enhanced
magnetic resonance imaging (DCE MRI) with subsequent pharmacokinetic analysis using the Brix model.
Imaging was performed pre-treatment (baseline) and 8 days later. Further, hypoxic fractions were determined by
pimonidazole immunohistochemistry of excised tumor tissue.
Results: As expected, the therapeutic effect of RT was significantly less effective under hypoxic than normoxic
conditions. However, concomitant administration of PL-DXR significantly improved the therapeutic outcome
following RT in hypoxic tumors. Further, the pharmacokinetic DCE MRI parameters and hypoxic fractions suggest
PL-DXR to induce growth-inhibitory effects without interfering with tumor vascular functions.
Conclusions: We found that DXR encapsulated in liposomes improved the therapeutic effect of RT under hypoxic
conditions without affecting vascular functions. Thus, we propose that for cytotoxic agents affecting tumor vascular
functions liposomes may be a promising drug delivery technology for use in chemoradiotherapy.
Fri, 07 Oct 2011 00:00:00 GMThttp://hdl.handle.net/1956/54812011-10-07T00:00:00ZGenotyping errors in a calibrated DNA register: implications for identification of individuals
http://hdl.handle.net/1956/5480
Genotyping errors in a calibrated DNA register: implications for identification of individuals
Haaland, Øystein; Glover, Kevin; Seliussen, Bjørghild Breistein; Skaug, Hans J.
Peer reviewed; Journal article
Background: The use of DNA methods for the identification and management of natural resources is gaining
importance. In the future, it is likely that DNA registers will play an increasing role in this development.
Microsatellite markers have been the primary tool in ecological, medical and forensic genetics for the past two
decades. However, these markers are characterized by genotyping errors, and display challenges with calibration
between laboratories and genotyping platforms. The Norwegian minke whale DNA register (NMDR) contains
individual genetic profiles at ten microsatellite loci for 6737 individuals captured in the period 1997-2008. These
analyses have been conducted in four separate laboratories for nearly a decade, and offer a unique opportunity to
examine genotyping errors and their consequences in an individual based DNA register. We re-genotyped 240
samples, and, for the first time, applied a mixed regression model to look at potentially confounding effects on
genotyping errors.
Results: The average genotyping error rate for the whole dataset was 0.013 per locus and 0.008 per allele. Errors
were, however, not evenly distributed. A decreasing trend across time was apparent, along with a strong within-sample
correlation, suggesting that error rates heavily depend on sample quality. In addition, some loci were more
error prone than others. False allele size constituted 18 of 31 observed errors, and the remaining errors were ten
false homozygotes (i.e., the true genotype was a heterozygote) and three false heterozygotes (i.e., the true
genotype was a homozygote).
Conclusions: To our knowledge, this study represents the first investigation of genotyping error rates in a wildlife
DNA register, and the first application of mixed models to examine multiple effects of different factors influencing
the genotyping quality. It was demonstrated that DNA registers accumulating data over time have the ability to
maintain calibration and genotyping consistency, despite analyses being conducted on different genotyping
platforms and in different laboratories. Although errors were detected, it is demonstrated that if the re-genotyping
of individual samples is possible, these will have a minimal effect on the database’s primary purpose, i.e., to
perform individual identification.
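As a sanity check, the headline per-locus rate follows directly from the counts given in the abstract. This sketch only recomputes that figure; the per-allele rate additionally requires counting how many individual alleles each error affects, which the abstract does not fully break down.

```python
# Counts from the abstract: 240 re-genotyped samples, 10 microsatellite
# loci, 31 observed genotype errors in total.
n_samples, n_loci = 240, 10
n_genotype_errors = 31

# Per-locus error rate: errors per genotyped (sample, locus) pair
per_locus = n_genotype_errors / (n_samples * n_loci)
print(round(per_locus, 3))   # 0.013, matching the reported per-locus rate
```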
Wed, 20 Apr 2011 00:00:00 GMThttp://hdl.handle.net/1956/54802011-04-20T00:00:00ZLie–Butcher series and geometric numerical integration on manifolds
http://hdl.handle.net/1956/5436
Lie–Butcher series and geometric numerical integration on manifolds
Lundervold, Alexander
Doctoral thesis
The thesis belongs to the field of “geometric numerical integration” (GNI), whose
aim it is to construct and study numerical integration methods for differential equations
that preserve some geometric structure of the underlying system. Many systems
have conserved quantities, e.g. the energy in a conservative mechanical system
or the symplectic structures of Hamiltonian systems, and numerical methods
that take this into account are often superior to those constructed with the more
classical goal of achieving high order.
An important tool in the study of numerical methods is the Butcher series (B-series)
invented by John Butcher in the 1960s. These are formal series expansions
indexed by rooted trees and have been used extensively for order theory and the
study of structure preservation. The thesis puts particular emphasis on B-series
and their generalization to methods for equations evolving on manifolds, called
Lie–Butcher series (LB-series).
It has become apparent that algebra and combinatorics can bring a lot of insight
into this study. Many of the methods and concepts are inherently algebraic or
combinatoric, and the tools developed in these fields can often be used to great
effect. Several examples of this will be discussed throughout. The thesis is structured as follows: background material on geometric numerical
integration is collected in Part I. It consists of several chapters: in Chapter 1 we
look at some of the main ideas of geometric numerical integration. The emphasis
is put on B-series, and the analysis of these. Chapter 2 is devoted to differential
equations evolving on manifolds, and the series corresponding to B-series in this
setting. Chapter 3 consists of short summaries of the papers included in Part II.
Part II is the main scientific contribution of the thesis, consisting of reproductions
of three papers on material related to geometric numerical integration.
Fri, 04 Nov 2011 00:00:00 GMThttp://hdl.handle.net/1956/54362011-11-04T00:00:00ZScroll codes over curves of higher genus
http://hdl.handle.net/1956/5336
Scroll codes over curves of higher genus
Johnsen, Trygve; Rasmussen, Nils Henry
Journal article; Peer reviewed
We construct linear codes from scrolls over curves of high genus and study the higher
support weights d_i of these codes. We embed the scroll into projective space P^(k-1) and
calculate bounds for the d_i by considering the maximal number of F_q-rational points that are
contained in a codimension h subspace of P^(k-1). We find lower bounds for the d_i and, for
large i, calculate the exact values of the d_i.
This work follows the natural generalisation of Goppa codes to higher-dimensional varieties
as studied by S.H. Hansen, C. Lomont and T. Nakashima.
Fri, 01 Jan 2010 00:00:00 GMThttp://hdl.handle.net/1956/53362010-01-01T00:00:00ZNumerical Simulation Studies of the Long-term Evolution of a CO2 Plume in a Saline Aquifer with a Sloping Caprock
http://hdl.handle.net/1956/5230
Numerical Simulation Studies of the Long-term Evolution of a CO2 Plume in a Saline Aquifer with a Sloping Caprock
Pruess, Karsten; Nordbotten, Jan Martin
Peer reviewed; Journal article
We have used the TOUGH2-MP/ECO2N code to perform numerical simulation
studies of the long-term behavior of CO2 stored in an aquifer with a sloping caprock. This
problem is of great practical interest, and is very challenging due to the importance of multiscale
processes. We find that the mechanism of plume advance is different from what is seen
in a forced immiscible displacement, such as gas injection into a water-saturated medium.
Instead of pushing the water forward, the plume advances because the vertical pressure gradients
within the plume are smaller than hydrostatic, causing the groundwater column to
collapse ahead of the plume tip. Increased resistance to vertical flow of aqueous phase in
anisotropic media leads to reduced speed of up-dip plume advancement. Vertical equilibrium
models that ignore effects of vertical flow will overpredict the speed of plume advancement.
The CO2 plume becomes thinner as it advances, but the speed of advancement remains constant
over the entire simulation period of up to 400 years, with migration distances of more
than 80 km. Our simulations include dissolution of CO2 into the aqueous phase and associated
density increase, and molecular diffusion. However, no convection develops in the
aqueous phase because it is suppressed by the relatively coarse (sub-) horizontal gridding
required in a regional-scale model. A first crude sub-grid-scale model was developed to represent
convective enhancement of CO2 dissolution. This process is found to greatly reduce
the thickness of the CO2 plume, but, for the parameters used in our simulations, does not
affect the speed of plume advancement.
Wed, 02 Feb 2011 00:00:00 GMThttp://hdl.handle.net/1956/52302011-02-02T00:00:00ZNew examples of curves with a one-dimensional family of pencils of minimal degree
http://hdl.handle.net/1956/5219
New examples of curves with a one-dimensional family of pencils of minimal degree
Rasmussen, Nils Henry
Peer reviewed; Journal article
We give a geometric construction of sub-linear systems on a K3
surface consisting of smooth curves C with infinitely many g^1_{gon(C)}'s.
Thu, 14 Jul 2011 00:00:00 GMThttp://hdl.handle.net/1956/52192011-07-14T00:00:00ZOn Semi-implicit Splitting Schemes for the Beltrami Color Image Filtering
http://hdl.handle.net/1956/5210
On Semi-implicit Splitting Schemes for the Beltrami Color Image Filtering
Rosman, Guy; Dascal, Lorina; Tai, Xue-Cheng; Kimmel, Ron
Peer reviewed; Journal article
The Beltrami flow is an efficient nonlinear filter
that was shown to be effective for color image processing.
The corresponding anisotropic diffusion operator strongly
couples the spectral components. Usually, this flow is implemented
by explicit schemes, which are stable only for
very small time steps and therefore require many iterations.
In this paper we introduce a semi-implicit Crank-Nicolson
scheme based on locally one-dimensional (LOD)/additive
operator splitting (AOS) for implementing the anisotropic
Beltrami operator. The mixed spatial derivatives are treated
explicitly, while the non-mixed derivatives are approximated in an implicit manner. In the case of constant coefficients, the
LOD splitting scheme is proven to be unconditionally stable.
Numerical experiments indicate that the proposed scheme
is also stable in more general settings. Stability, accuracy,
and efficiency of the splitting schemes are tested in applications
such as the Beltrami-based scale-space, Beltrami denoising
and Beltrami deblurring. In order to further accelerate
the convergence of the numerical scheme, the reduced
rank extrapolation (RRE) vector extrapolation technique is
employed.
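A minimal sketch of the additive-operator-splitting (AOS) idea behind such schemes, applied here to plain 2D linear diffusion with constant coefficients rather than the full anisotropic Beltrami operator; grid size and time step are illustrative assumptions.

```python
import numpy as np

# AOS step for 2D linear diffusion u_t = u_xx + u_yy with zero Dirichlet
# boundaries (the paper's operator is anisotropic and couples channels;
# this constant-coefficient toy only illustrates unconditional stability).
n, tau = 32, 10.0                  # deliberately large time step (h = 1)
I = np.eye(n)
L = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)   # 1D Laplacian

u0 = np.zeros((n, n))
u0[n // 2, n // 2] = 1.0           # impulse initial data

# AOS: average one implicit solve per spatial direction, each with a
# doubled time step:  u_new = 0.5 * sum_d (I - 2*tau*A_d)^(-1) u
M = I - 2.0 * tau * L
ux = np.linalg.solve(M, u0)        # implicit sweep along x (columns)
uy = np.linalg.solve(M, u0.T).T    # implicit sweep along y (rows)
u1 = 0.5 * (ux + uy)

print(np.abs(u1).max() <= np.abs(u0).max())   # True even for tau >> h^2
```

Because M is an M-matrix with row sums at least 1, each implicit sweep satisfies a discrete maximum principle, which is the stability property the semi-implicit construction is after.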
Sat, 15 Jan 2011 00:00:00 GMThttp://hdl.handle.net/1956/52102011-01-15T00:00:00ZNumerical modelling of organic waste dispersion from fjord located fish farms
http://hdl.handle.net/1956/5209
Numerical modelling of organic waste dispersion from fjord located fish farms
Ali, Alfatih Mohammed A.; Thiem, Øyvind; Berntsen, Jarle
Peer reviewed; Journal article
In this study, a three-dimensional particle
tracking model coupled to a terrain following ocean
model is used to investigate the dispersion and the deposition
of fish farm particulate matter (uneaten food
and fish faeces) on the seabed due to tidal currents. The
particle tracking model uses the computed local flow
field for advection of the particles and random movement
to simulate the turbulent diffusion. Each particle
is given a settling velocity which may be drawn from
a probability distribution according to settling velocity
measurements of faecal and feed pellets. The results
show that the maximum concentration of organic waste
for fast-sinking particles is found under the fish cage
and decreases monotonically away from the
cage area. The maximum can split into two
peaks located on either side of the centre of the fish cage
area in the current direction. This process depends on
the sinking time (time needed for a particle to settle at
the bottom), the tidal velocity and the fish cage size. If the sinking time is close to a multiple of the tidal
period, the maximum concentration point will be under
the fish cage irrespective of the tide strength. This is
due to the nature of the tidal current first propagating
the particles away and then bringing them back when
the tide reverses. Increasing the cage size increases the
likelihood for a maximum waste accumulation beneath
the fish farm, and larger farms usually mean larger biomasses,
which can make the local pollution even more
severe. The model is validated by using an analytical
model which uses an exact harmonic representation
of the tidal current, and the results show an excellent
agreement. This study shows that the coupled ocean
and particle model can be used in more realistic applications
to help estimate the local environmental
impact of fish farms.
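The sinking-time argument can be illustrated deterministically, without the turbulent random walk: under a purely harmonic tidal current, a particle whose sinking time equals the tidal period lands almost directly beneath the release point. All numbers below are illustrative assumptions, not the study's parameters.

```python
import math

# A particle released at depth H with settling speed w is advected by a
# tidal current u(t) = U * sin(2*pi*t / T).  If the sinking time H/w is a
# multiple of the tidal period T, horizontal excursions cancel.
U, T = 0.2, 12.42 * 3600.0        # current amplitude (m/s), M2 period (s)
H, w = 44.712, 0.001              # depth (m), settling speed (m/s): H/w = T
dt = 1.0                          # time step (s)

x, z, t = 0.0, H, 0.0
while z > 0.0:
    x += U * math.sin(2.0 * math.pi * t / T) * dt   # tidal advection
    z -= w * dt                                      # settling
    t += dt

print(abs(x))   # ≈ 0: deposition under the cage
```

Changing H (or w) so that the sinking time is, say, half a tidal period leaves the particle with a large net excursion instead, which is the mechanism behind the split concentration peaks described above.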
Thu, 21 Apr 2011 00:00:00 GMThttp://hdl.handle.net/1956/52092011-04-21T00:00:00ZMultiscale mass conservative domain decomposition preconditioners for elliptic problems on irregular grids
http://hdl.handle.net/1956/5207
Multiscale mass conservative domain decomposition preconditioners for elliptic problems on irregular grids
Sandvin, Andreas; Nordbotten, Jan Martin; Aavatsmark, Ivar
Journal article; Peer reviewed
Multiscale methods can in many cases be
viewed as special types of domain decomposition preconditioners.
The localisation approximations introduced
within the multiscale framework are dependent
upon both the heterogeneity of the reservoir and the
structure of the computational grid. While previous
works on multiscale control volume methods have focused
on heterogeneous elliptic problems on regular
Cartesian grids, we have tested the multiscale control
volume formulations on two-dimensional elliptic problems
involving heterogeneous media and irregular grid
structures. Our study shows that the tangential flow
approximation commonly used within multiscale methods
is not suited for problems involving rough grids.
We present a more robust mass conservative domain
decomposition preconditioner for simulating flow in
heterogeneous porous media on general grids.
Wed, 01 Jun 2011 00:00:00 GMThttp://hdl.handle.net/1956/52072011-06-01T00:00:00ZSequential Data Assimilation in High Dimensional Nonlinear Systems
http://hdl.handle.net/1956/5178
Sequential Data Assimilation in High Dimensional Nonlinear Systems
Stordal, Andreas Størksen
Doctoral thesis
Since its introduction in 1994, the ensemble Kalman filter (EnKF) has gained a lot
of attention as a tool for sequential data assimilation in many scientific areas. Because its
algorithm is computationally fast and easy to implement, its popularity has increased
vastly over the last decades, especially in the fields of oceanography and petroleum engineering.
Although the EnKF has been successfully applied to many real-world problems,
it has a major drawback from a statistical point of view: the algorithm only converges to
the optimal solution if the system under consideration is linear and all random variables
describing the system are Gaussian.
There exist sequential Monte Carlo methods (SMC) with correct asymptotic properties,
but both numerical and theoretical studies have shown that the number of samples
must increase exponentially with the dimension of the model to avoid a collapse of the
algorithm. For large scale geophysical systems, such as petroleum reservoirs or ocean
models, each sample requires the solution of a system of partial differential equations
on a large grid. The computational burden of solving these equations using numerical
schemes naturally puts an upper limit on the number of samples we can use in practice.
Hence these sequential Monte Carlo methods are not applicable, at least in their
simplest form, in large scale geophysical models.
This thesis explores the possibility of bridging the gap between EnKF and one of the
asymptotically correct SMC methods, known as particle filters, by extending already
known theory on Gaussian mixture filters. In addition a sensitivity analysis is carried
out for a new type of data in reservoir models.
A new approximative filter is developed by introducing an additional parameter in
the standard Gaussian mixture filter. The adaptive Gaussian mixture filter (AGM) consists
of two parameters and by choosing these differently the filter may run as EnKF, a
particle filter, or something in between. The method is tested on the Lorenz96 model for
comparison with EnKF and Gaussian mixture filters. Further comparison with EnKF is
made after running AGM on a 2D two-phase and a 3D three-phase petroleum reservoir.
We generalize AGM and compute error bounds and asymptotic properties using classical
approximation techniques before we explore the effects of estimating the first and
second order moments locally in Kalman type filters. By local estimation we mean local
in value and not in space. Two different approaches are suggested and applied to the
chaotic Lorenz63 model and a 1D reservoir model. Finally, the sensitivity of reservoir
parameters to a new type of data, nanosensor observations, is calculated. Both analytical
and numerical results are provided. A simulation experiment is included from
which we can compute the resolution of the estimated parameters numerically.
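The EnKF analysis step that the thesis takes as its starting point can be sketched as follows (stochastic EnKF with perturbed observations). Dimensions, the observation operator and all numbers are illustrative assumptions, not the thesis's models.

```python
import numpy as np

rng = np.random.default_rng(1)
n_state, n_obs, n_ens = 5, 2, 50

X = rng.normal(size=(n_state, n_ens))        # forecast ensemble (columns)
H = np.zeros((n_obs, n_state))
H[0, 0] = H[1, 3] = 1.0                      # observe state components 0 and 3
R = 0.1 * np.eye(n_obs)                      # observation error covariance
d = np.array([1.0, -0.5])                    # observed data

# Sample covariance of the forecast ensemble
A = X - X.mean(axis=1, keepdims=True)
C = A @ A.T / (n_ens - 1)

# Kalman gain and update with perturbed observations
K = C @ H.T @ np.linalg.inv(H @ C @ H.T + R)
D = d[:, None] + rng.multivariate_normal(np.zeros(n_obs), R, size=n_ens).T
Xa = X + K @ (D - H @ X)

# Analysis means of the observed components are pulled toward the data
print(Xa[0].mean(), Xa[3].mean())
```

The Gaussian mixture and AGM filters discussed above generalize this update by carrying one such Kalman-type step per mixture component, with weights updated in a particle-filter fashion.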
Fri, 21 Oct 2011 00:00:00 GMThttp://hdl.handle.net/1956/51782011-10-21T00:00:00ZNumerical Methods for Conservation Laws with a Discontinuous Flux Function
http://hdl.handle.net/1956/5176
Numerical Methods for Conservation Laws with a Discontinuous Flux Function
Tveit, Svenn
Master thesis
When simulating two-phase flow in a porous medium, numerical methods are used to solve the equations of flow, called conservation laws. In the industry this is done by a reservoir simulator, and the most widely used method is the Upstream Mobility scheme. It is useful to compare how this scheme solves the flow problem against academically accepted schemes, like Godunov's method and Engquist-Osher's method. To assess the numerical approximations, the underlying theory must be known, especially when dealing with spatial discontinuities; only then is a comparison between numerical results applicable to physical models. In this thesis we have investigated the theory of conservation laws with discontinuous flux functions, introduced a new scheme for this problem, Local Lax-Friedrichs, and compared the Upstream Mobility scheme against the Godunov, Engquist-Osher and Local Lax-Friedrichs schemes.
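A minimal sketch of the Local Lax-Friedrichs flux for a scalar conservation law, here with the Burgers flux f(u) = u^2/2 on a periodic grid rather than the two-phase flow problem with a spatially discontinuous flux treated in the thesis; grid, data and CFL number are illustrative.

```python
import numpy as np

# Scalar conservation law u_t + f(u)_x = 0 with f(u) = u^2/2 (Burgers)
f = lambda u: 0.5 * u**2
df = lambda u: np.abs(u)                  # |f'(u)|, the local wave speed

def llf_step(u, dx, dt):
    """One conservative update with the Local Lax-Friedrichs flux."""
    ul, ur = u, np.roll(u, -1)            # states left/right of each interface
    alpha = np.maximum(df(ul), df(ur))    # local maximal wave speed
    F = 0.5 * (f(ul) + f(ur)) - 0.5 * alpha * (ur - ul)
    return u - dt / dx * (F - np.roll(F, 1))

n = 200
dx = 1.0 / n
x = dx * np.arange(n)
u = np.sin(2 * np.pi * x) + 0.5           # smooth periodic initial data
dt = 0.4 * dx / np.abs(u).max()           # CFL-limited time step

mass0 = u.sum() * dx
for _ in range(100):
    u = llf_step(u, dx, dt)
print(abs(u.sum() * dx - mass0))          # conservative: total mass preserved
```

Because the update is in flux-difference form, total mass is conserved to rounding error even after the solution steepens into a shock, which is the property that makes such schemes meaningful reference points for the Upstream Mobility comparison.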
Wed, 01 Jun 2011 00:00:00 GMThttp://hdl.handle.net/1956/51762011-06-01T00:00:00ZValue of different data types for estimation of permeability
http://hdl.handle.net/1956/5139
Value of different data types for estimation of permeability
Fossum, Kristian
Master thesis
The interrelation between sensitivity, non-linearity and scale, associated with the inverse problem of identifying permeability from measurements of fluid flow, is considered. The governing equations for fluid flow in a porous medium are presented, and a measure of sensitivity and non-linearity is derived. By making some simplifications, this measure can be analyzed for different flow structures. The effects of the simplifications are then investigated separately. Finally, some numerical experiments are conducted. They highlight both the effects of the simplifications and whether the interrelation between sensitivity and scale carries over to a 2-D setting.
Wed, 01 Jun 2011 00:00:00 GMThttp://hdl.handle.net/1956/51392011-06-01T00:00:00ZStatistiske modellar for Poissonregresjon med modifiserte null-sannsyn, ZIP og ZAP
http://hdl.handle.net/1956/5136
Statistiske modellar for Poissonregresjon med modifiserte null-sannsyn, ZIP og ZAP
Kårstad, Solveig
Master thesis
The thesis treats the zero-inflated and zero-altered Poisson models, usually abbreviated
ZIP and ZAP. These are two regression models that use the Poisson as their underlying
distribution but allow the probability of the outcome 0 to exceed the value that the
ordinary Poisson distribution assigns to it. The two models were first presented in
their original form in a 1986 article by John Mullahy, and have since become
increasingly common in applied regression work. Interest in ZIP grew in particular
after Diane Lambert, in a 1992 article, developed Mullahy's ZIP model into a more
general and applicable version. The thesis reviews both articles. A number of papers
on ZIP and ZAP have since been published, but most consider the models relative to
ordinary Poisson regression rather than to each other. When ZIP and ZAP are compared,
it is usually done by fitting both models to the same dataset and then comparing AIC
values to determine which of the two best explains the relationship between the
response variable and the explanatory variables. Little literature is available on how
to know in advance which of the models one should choose for a given dataset. The work
in this thesis therefore attempts to find out whether the choice of model actually
matters for the reliability of the estimated regression coefficients, what the
consequence of choosing the wrong model is, and, not least, how one can know in
advance which model is the correct one for a given analysis. The thesis consists
mainly of a theoretical part and a part with practical analysis of ZIP and ZAP. In
the first part, Chapters 1-5, the reader is given a theoretical introduction to the
two models; this is necessary background for understanding the analysis of the models
in the second part, Chapters 6-9.
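The zero-modification at the heart of the ZIP model can be stated compactly. A minimal sketch of the ZIP probability mass function, with illustrative parameter values (lam and pi are not from the thesis):

```python
import math

# Zero-inflated Poisson (ZIP): with inflation probability pi,
#   P(0) = pi + (1 - pi) * exp(-lam)
#   P(k) = (1 - pi) * lam^k * exp(-lam) / k!   for k >= 1,
# so the zero probability exceeds the plain Poisson value exp(-lam).
def zip_pmf(k, lam, pi):
    poisson = math.exp(-lam) * lam**k / math.factorial(k)
    return pi + (1 - pi) * poisson if k == 0 else (1 - pi) * poisson

lam, pi = 2.0, 0.3
print(zip_pmf(0, lam, pi) > math.exp(-lam))                            # True
print(abs(sum(zip_pmf(k, lam, pi) for k in range(60)) - 1.0) < 1e-12)  # True
```

The ZAP (hurdle) variant instead models the zeros separately and draws the positive counts from a zero-truncated Poisson, which is the structural difference the thesis compares.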
Thu, 05 May 2011 00:00:00 GMThttp://hdl.handle.net/1956/51362011-05-05T00:00:00ZLinear and Nonlinear Convection in Porous Media between Coaxial Cylinders
http://hdl.handle.net/1956/5123
Linear and Nonlinear Convection in Porous Media between Coaxial Cylinders
Bringedal, Carina
Master thesis
In this thesis we develop a mathematical model for describing three-dimensional natural convection in porous media filling a vertical annular cylinder. We apply a linear stability analysis to determine the onset of convection and the preferred convective mode when the annular cylinder is subject to two different types of boundary conditions: heat insulated sidewalls and heat conducting sidewalls. The case of an annular cylinder with insulated sidewalls has been investigated earlier, but our results reveal more details than previously found. We also investigate the case of the radius of the inner cylinder approaching zero and the results are compared with previous work for non-annular cylinders.
Using pseudospectral methods we have built a high-order numerical simulator to uncover the nonlinear regime of the convection cells. We study the onset and geometry of convection modes, and look at the stability of the modes with respect to different types of perturbations. We also examine how variations in the Rayleigh number affect the convection modes and their stability regimes. We uncover an increased complexity regarding which modes are obtained, and we are able to identify stable secondary and mixed modes. We find that the different convective modes have overlapping stability regions depending on the Rayleigh number.
The motivation for studying natural convection in porous media is related to geothermal energy extraction and we attempt to determine the effect of convection cells in a geothermal heat reservoir. However, limitations in the simulator do not allow us to make any conclusions on this matter.
Mon, 02 May 2011 00:00:00 GMThttp://hdl.handle.net/1956/51232011-05-02T00:00:00ZSome Regularity Results for Certain Weakly Quasiregular Mappings on the Heisenberg Group and Elliptic Equations
http://hdl.handle.net/1956/5122
Some Regularity Results for Certain Weakly Quasiregular Mappings on the Heisenberg Group and Elliptic Equations
Li, Qifan
Master thesis
The thesis is organized as follows. In Chapter 1, we establish a higher integrability result for the horizontal part of certain weakly quasiregular maps on the Heisenberg group. Unlike the Euclidean case, the integrability exponent need not be near the homogeneous dimension Q, which is not analogous to the Euclidean setting. Chapter 2 is devoted to the study of self-improving regularity for certain subelliptic equations. The difficulty of this problem in the Carnot group is that the Whitney extension theorem and the main result in the Carnot group can be obtained only for fourth-order homogeneous subelliptic systems from the arguments in (J. L. Lewis, On the very weak solutions of certain elliptic systems, Comm. Part. Diff. Equ., 18 (1993), 1515-1537). Since the p-sub-Laplace equation is a very special case of the nonlinear subelliptic equations, we can establish a better result in this case via the arguments from (R. R. Coifman, P. L. Lions, Y. Meyer and S. Semmes, Compensated compactness and Hardy spaces, J. Math. Pures Appl., 72 (1993), 247-286). Chapter 3 provides a discussion of self-improving regularity for degenerate elliptic equations in Euclidean space. The main result of Chapter 3 extends the result of Lewis cited above to degenerate elliptic systems. The proof relies on the weighted pointwise Sobolev inequality for higher-order derivatives, which is a useful tool in the study of higher-order degenerate elliptic systems.
Wed, 01 Jun 2011 00:00:00 GMThttp://hdl.handle.net/1956/51222011-06-01T00:00:00ZSome demographic methods applied to urban and rural populations of Pakistan
http://hdl.handle.net/1956/5030
Some demographic methods applied to urban and rural populations of Pakistan
Faisal, Shahzad
Master thesis
Summary of findings: In this thesis I first describe what demography is and different ways to collect demographic data, and then apply some demographic techniques to the population of Pakistan. First, I consider infant mortality in Pakistan and apply a hypothesis test based on a 2 x 2 table to show that there is a difference in the facilities and services provided by the government to the urban and rural populations, reporting z and chi-square tests with p-values. The results indicate a large difference in policy between urban and rural areas of Pakistan; the p-value of 0.00001 shows that the hypothesis is highly significant. Although only 35% of the population resides in urban areas, these areas receive most of the attention, while the remaining 65% receive less attention from government institutions. Second, using data from the Federal Bureau of Statistics, Pakistan, I construct life tables for the total, urban, rural, male and female populations of Pakistan. The results show that life expectancy at birth in urban areas (68.7 years) is 6% higher than in rural areas (64.3 years). Similarly, the probability of dying in the first age interval is 10% smaller in urban than in rural areas (0.06444 and 0.07197, respectively). Moreover, female life expectancy at birth (68.4 years) is 7% higher than male life expectancy (64.3 years). Third, I apply the decomposition technique introduced by Kitagawa (1955) to see how much of the difference between urban and rural death rates is attributable to differences in their age distributions.
The results shows that the original difference between the urban and rural population is -0.00210 (by equation 7.2) while the contribution of age compositional differences and contributions of age specific rate differences are -0.00052807 & -0.00157492 respectively (by equation 7.3). Further, the proportion of difference attributable to differences in age composition is found to be 25% whereas the proportion of difference attributable to differences in rate schedules is 75% which shows that both parts are contributing in the same direction to the difference. Lastly I have tried to make a population forecasting for Pakistan. For this, a few methods has been discussed and have made a forecast by using the compound rate of growth method and cohort component method. According to the first method, it shows that there might be 294.96 million population in the year 2032 (equation 8.13) whereas the second method states that it might be 258.09 million in the year 2031 (equation 8.14). It seems reasonable to say that the estimates found by the cohort component method are more reliable than the any other method as the cohort component is now the only method on which demographers are relying much.
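The Kitagawa decomposition used above splits the difference between two crude rates into an age-composition component and a rate-schedule component, and the two components sum to the total difference exactly. A minimal sketch, using hypothetical urban/rural rates and age distributions rather than the thesis data (equations 7.2-7.3 refer to the thesis, not to this code):

```python
def kitagawa_decompose(rates_a, rates_b, comp_a, comp_b):
    """Split the crude-rate difference between populations A and B into a
    component due to age composition and one due to age-specific rates.
    rates_* are age-specific rates, comp_* are age-distribution proportions."""
    total = sum(r * c for r, c in zip(rates_a, comp_a)) - \
            sum(r * c for r, c in zip(rates_b, comp_b))
    # contribution of differences in age composition (rates averaged)
    comp_effect = sum(((ra + rb) / 2) * (ca - cb)
                      for ra, rb, ca, cb in zip(rates_a, rates_b, comp_a, comp_b))
    # contribution of differences in age-specific rates (composition averaged)
    rate_effect = sum(((ca + cb) / 2) * (ra - rb)
                      for ra, rb, ca, cb in zip(rates_a, rates_b, comp_a, comp_b))
    return total, comp_effect, rate_effect
```

For comparison, the thesis reports components of -0.00052807 and -0.00157492 against a total difference of -0.00210.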
Wed, 01 Jun 2011 00:00:00 GMThttp://hdl.handle.net/1956/50302011-06-01T00:00:00ZReconstructing Open Surfaces via Graph-Cuts
http://hdl.handle.net/1956/5024
Reconstructing Open Surfaces via Graph-Cuts
Wan, Min; Wang, Yu; Bae, Egil; Tai, Xue-Cheng; Wang, Desheng
A novel graph-cuts-based method is proposed for reconstructing open surfaces from unordered point sets. Through a Boolean operation on the crust around the data set, the open surface problem is translated to a watertight surface problem within a restricted region. Integrating the variational model, Delaunay-based tetrahedral mesh framework and multi-phase technique, the proposed method can reconstruct open surfaces robustly and effectively. Furthermore, a surface reconstruction method with domain decomposition is presented, which is based on the new open surface reconstruction method. This method can handle more general surfaces, such as non-orientable surfaces. The algorithm is designed in a parallel-friendly way and necessary measures are taken to eliminate cracks at the interface between the subdomains. Numerical examples are included to demonstrate the robustness and effectiveness of the proposed method on watertight, open orientable, open non-orientable surfaces and combinations of such.
Sat, 01 Jan 2011 00:00:00 GMThttp://hdl.handle.net/1956/50242011-01-01T00:00:00ZA Continuous Max-Flow Approach to Minimal Partitions with Label Cost Prior
http://hdl.handle.net/1956/5023
A Continuous Max-Flow Approach to Minimal Partitions with Label Cost Prior
Yuan, Jing; Bae, Egil; Boykov, Yuri; Tai, Xue-Cheng
Chapter; Peer reviewed
This paper investigates a convex relaxation approach for minimum description length (MDL) based image partitioning or labeling, which proposes an energy functional regularized by the spatial smoothness prior jointly with a penalty on the total number of appearances or labels, the so-called label cost prior. As is common in recent studies of convex relaxation approaches, the total-variation term is applied to encode the spatial regularity of partition boundaries, and the auxiliary label cost term is penalized by the sum of convex infinity norms of the labeling functions. We study the proposed convex MDL-based image partition model from a novel continuous flow maximization perspective, where we show that the label cost prior amounts to a relaxation of the flow conservation condition, which is crucial to the study of the classical duality of max-flow and min-cut. To the best of our knowledge, this is the first demonstration of such a connection between the relaxation of flow conservation and the penalty on the total number of active appearances. In addition, we show that the proposed continuous max-flow formulation also leads to a fast and reliable max-flow based algorithm for the challenging convex optimization problem, which significantly outperforms the previous approach of direct convex programming in terms of speed, computational load and the handling of large-scale images. Its numerical scheme can be easily implemented and accelerated by advanced computational frameworks, e.g. GPUs.
Sat, 01 Jan 2011 00:00:00 GMThttp://hdl.handle.net/1956/50232011-01-01T00:00:00ZA Fast Continuous Max-Flow Approach to Non-Convex Multilabeling Problems
http://hdl.handle.net/1956/5021
A Fast Continuous Max-Flow Approach to Non-Convex Multilabeling Problems
Bae, Egil; Yuan, Jing; Tai, Xue-Cheng; Boykov, Yuri
This work addresses a class of multilabeling problems over a spatially continuous image domain, where the data fidelity term can be any bounded function, not necessarily convex. Two total variation based regularization terms are considered, the first favoring a linear relationship between the labels and the second independent of the label values (Potts model). In the spatially discrete setting, Ishikawa [33] showed that the first of these labeling problems can be solved exactly by standard max-flow and min-cut algorithms over specially designed graphs. We propose a continuous analogue of Ishikawa's graph construction [33] by formulating continuous max-flow and min-cut models over a specially designed domain. These max-flow and min-cut models are equivalent under a primal-dual perspective. They can be seen as exact convex relaxations of the original problem and can be used to compute global solutions. Fast continuous max-flow based algorithms are proposed based on the max-flow models, whose efficiency and reliability can be validated both by standard optimization theory and by experiments. In comparison to previous work [53, 52] on continuous generalizations of Ishikawa's construction, our approach differs in the max-flow dual treatment, which leads to the following main advantages: a new theoretical framework which embeds the label order constraints implicitly and naturally yields optimal labeling functions taking values in any predefined finite label set; a more general thresholding theorem which allows a larger set of non-unique solutions to the original problem to be produced; and numerical experiments showing that the new max-flow algorithms converge faster than the fast primal-dual algorithm of [53, 52], with a speedup factor that is especially significant at high precision. Finally, our dual formulation and algorithms are extended to a recently proposed convex relaxation of the Potts model [50], thereby avoiding expensive iterative computations of projections without closed-form solutions.
Sat, 01 Jan 2011 00:00:00 GMThttp://hdl.handle.net/1956/50212011-01-01T00:00:00ZA Study on Continuous Max-Flow and Min-Cut Approaches
http://hdl.handle.net/1956/5020
A Study on Continuous Max-Flow and Min-Cut Approaches
Yuan, Jing; Bae, Egil; Tai, Xue-Cheng
Chapter; Peer reviewed
We propose and investigate novel max-flow models in the spatially continuous setting, with and without supervision constraints, in a comparative study with graph-based max-flow / min-cut. We show that the continuous max-flow models correspond to their respective continuous min-cut models as primal and dual problems, and that the continuous min-cut formulation without supervision constraints includes the well-known Chan-Esedoglu-Nikolova model [15] as a special case. In this respect, basic concepts and terminology from discrete max-flow / min-cut are revisited from a new variational perspective. We prove that the associated nonconvex partitioning problems, unsupervised or supervised, can be solved globally and exactly via the proposed convex continuous max-flow and min-cut models. Moreover, we derive novel fast max-flow based algorithms whose convergence is guaranteed by standard optimization theory. Experiments on image segmentation, both unsupervised and supervised, show that our continuous max-flow based algorithms outperform previous approaches in terms of efficiency and accuracy.
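The discrete max-flow / min-cut duality that the continuous models above generalize can be seen on a toy graph. A minimal Edmonds-Karp sketch (the graph and its capacities are made up purely for illustration):

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp: repeatedly push flow along shortest augmenting paths.
    `cap` is a dict-of-dicts of residual capacities, modified in place."""
    flow = 0
    while True:
        # BFS for an augmenting path with positive residual capacity
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in cap[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow  # no augmenting path left: flow is maximal
        # bottleneck along the path, then update residual capacities
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        b = min(cap[u][v] for u, v in path)
        for u, v in path:
            cap[u][v] -= b
            cap[v].setdefault(u, 0)
            cap[v][u] += b
        flow += b
```

On the test graph below the maximum flow is 5, which equals the capacity of the minimum cut separating {s} from {a, b, t}; this equality is the duality the paper carries over to the continuous setting.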
2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. (An extended journal version).
Fri, 01 Jan 2010 00:00:00 GMThttp://hdl.handle.net/1956/50202010-01-01T00:00:00ZGlobal Minimization for Continuous Multiphase Partitioning Problems Using a Dual Approach
http://hdl.handle.net/1956/5019
Global Minimization for Continuous Multiphase Partitioning Problems Using a Dual Approach
Bae, Egil; Yuan, Jing; Tai, Xue-Cheng
Peer reviewed; Journal article
This paper is devoted to the optimization problem of continuous multi-partitioning, or multi-labeling, based on a convex relaxation of the continuous Potts model. In contrast to previous efforts, which tackle the optimal labeling problem directly, we first propose a novel dual model and then build a corresponding duality-based approach. By analyzing the dual formulation, sufficient conditions are derived which show that the relaxation is often exact, i.e. there exist optimal solutions that are also globally optimal for the original nonconvex Potts model. In order to deal with the nonsmooth dual problem, we develop a smoothing method based on the log-sum-exponential function and show that this smoothing approach leads to a novel smoothed primal-dual model and suggests labelings with maximum entropy. The smoothing of the dual model also yields a new thresholding scheme for obtaining approximate solutions. An expectation-maximization-like algorithm is proposed based on the smoothed formulation, which is shown to be superior in efficiency to earlier approaches from continuous optimization. Numerical experiments also show that our method outperforms several competitive approaches in various respects, such as lower energies and better visual quality.
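The log-sum-exponential smoothing mentioned above replaces the nonsmooth minimum over labels with a differentiable approximation whose gradient gives softmax label weights, i.e. a maximum-entropy labeling. A scalar sketch of the idea (the function names and toy energies are illustrative, not the paper's notation):

```python
import math

def softmin(values, eps):
    """Smooth approximation of min(values): -eps * log(sum(exp(-v/eps))).
    As eps -> 0 it approaches the true minimum; it never exceeds it."""
    m = min(values)  # subtract the min first for numerical stability
    s = sum(math.exp(-(v - m) / eps) for v in values)
    return m - eps * math.log(s)

def soft_labels(values, eps):
    """Softmax weights over labels; they sum to 1 and concentrate on the
    minimizing label as eps -> 0, giving a maximum-entropy soft labeling."""
    m = min(values)
    w = [math.exp(-(v - m) / eps) for v in values]
    z = sum(w)
    return [x / z for x in w]
```

Thresholding such soft labels (picking the largest weight per pixel) is one way to recover an approximate hard partition, in the spirit of the thresholding scheme described above.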
Fri, 01 Jan 2010 00:00:00 GMThttp://hdl.handle.net/1956/50192010-01-01T00:00:00ZEfficient Global Minimization for the Multiphase Chan-Vese Model of Image Segmentation
http://hdl.handle.net/1956/5018
Efficient Global Minimization for the Multiphase Chan-Vese Model of Image Segmentation
Bae, Egil; Tai, Xue-Cheng
Peer reviewed; Journal article
The Mumford-Shah model is an important variational image segmentation model. A popular multiphase level set approach, the Chan-Vese model, was developed as a numerical realization by representing the phases by several overlapping level set functions. Recently, a variant representation of the Chan-Vese model with binary level set functions was proposed. In both approaches, the gradient descent equations had to be solved numerically, a procedure which is slow and risks getting stuck in a local minimum.
In this work, we develop an efficient global minimization method for a discrete version of the level set representation of the Chan-Vese model with 4 regions (phases), based on graph cuts. If the average intensity values of the different regions are sufficiently evenly distributed, the energy function is submodular. It is shown theoretically and experimentally that this condition is expected to hold for the most commonly used data terms. We have also developed a method for minimizing nonsubmodular functions, which can produce global solutions in practice should the condition not be satisfied, as may happen for the L1 data term.
In: Proc. Seventh International Conference on Energy Minimization Methods in Computer Vision and Pattern Recognition (EMMCVPR 2009), pp. 28-41, Lecture Notes in Computer Science, Springer, Berlin, 2009. (Extended journal version).
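The submodularity condition referred to in the abstract is, for a pairwise term on two binary variables, the inequality E(0,0) + E(1,1) <= E(0,1) + E(1,0); when every pairwise term satisfies it, the energy can be minimized exactly by a single graph cut. A sketch of the check (the example terms are hypothetical):

```python
def is_submodular(E):
    """A pairwise term E(x_p, x_q) on binary labels is submodular iff
    E(0,0) + E(1,1) <= E(0,1) + E(1,0); submodular energies are exactly
    minimizable by max-flow / min-cut."""
    return E(0, 0) + E(1, 1) <= E(0, 1) + E(1, 0)

# Potts-type smoothness term: penalize disagreeing neighbors (submodular)
potts = lambda x, y: 0.0 if x == y else 1.0
# A term rewarding disagreement instead (not submodular)
anti = lambda x, y: 1.0 if x == y else 0.0
```

The abstract's condition on evenly distributed region intensities is what makes the induced pairwise terms pass this test for common data terms.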
Thu, 01 Jan 2009 00:00:00 GMThttp://hdl.handle.net/1956/50182009-01-01T00:00:00ZEfficient global minimization methods for variational problems in imaging and vision
http://hdl.handle.net/1956/5017
Efficient global minimization methods for variational problems in imaging and vision
Bae, Egil
Doctoral thesis
Energy minimization has become one of the most important paradigms for formulating image processing and computer vision problems in a mathematical language. Energy minimization models have been developed in both the variational and discrete optimization communities during the last 20-30 years. Some models have established themselves as fundamentally important and arise over a wide range of applications.
One fundamental challenge is the optimization aspect. The most desirable models are often the most difficult to handle from an optimization perspective. Continuous optimization problems may be non-convex and contain many inferior local minima. Discrete optimization problems may be NP-hard, which means algorithms are unlikely to exist that can always compute exact solutions without an unreasonable amount of effort.
This thesis contributes efficient optimization methods which can compute global or close-to-global solutions to important energy minimization models in imaging and vision. New insights are given into both continuous and combinatorial optimization, as well as a strengthening of the relationships between these fields.
One problem that is studied extensively is minimal perimeter partitioning with several regions, which arises naturally in e.g. image segmentation applications and is NP-hard in the discrete context. New methods are developed that can often compute global solutions, and otherwise very close approximations to global solutions. Experiments show the new methods perform significantly better than earlier variational approaches, like the level set method, and earlier combinatorial optimization approaches. The new algorithms are significantly faster than previous continuous optimization approaches.
In the discrete community, max-flow and min-cut (graph cuts) have gained huge popularity because they can efficiently compute global solutions to certain energy minimization models. It is shown that new types of problems can be solved exactly by max-flow and min-cut. Furthermore, variational generalizations of max-flow and min-cut are proposed which bring the global optimization property to the continuous setting, while avoiding the grid bias and metrication errors that are major disadvantages of the discrete models. Convex optimization algorithms are derived from the variational max-flow models; these are very efficient and more parallel-friendly than traditional combinatorial algorithms.
Fri, 19 Aug 2011 00:00:00 GMThttp://hdl.handle.net/1956/50172011-08-19T00:00:00ZSub-Riemannian geometry of spheres and rolling of manifolds
http://hdl.handle.net/1956/4822
Sub-Riemannian geometry of spheres and rolling of manifolds
Molina, Mauricio Godoy
Doctoral thesis
Fri, 29 Apr 2011 00:00:00 GMThttp://hdl.handle.net/1956/48222011-04-29T00:00:00ZPotensialet for dyp geotermisk energi i Norge: Modellering av varmeutvinning fra konstruerte geotermiske system
http://hdl.handle.net/1956/4726
Potensialet for dyp geotermisk energi i Norge: Modellering av varmeutvinning fra konstruerte geotermiske system
Brun, Knut-Erland
Master thesis
This project considers two different models of an engineered geothermal system: a closed single-well system and an open, fracture-dominated system. The single-well system is modelled explicitly, and the model is also fitted to two commercial systems, among them Rock Energy's plans for district heating outside Oslo. The model is examined both over a short time horizon (500 days) and a longer one (50 years). The open system is modelled as a homogeneous porous medium, with porosity and permeability determined from the properties of the fractures. This model, however, proved far more difficult to formulate than first expected, with problems related to stabilizing the time-dependent temperature equations. These problems could not be resolved within the time frame of the project.
The results from the single-well model show that for production rates on the order of 3-10 kg/s, under realistic geothermal conditions and with an injection temperature of 40 degrees C, the produced water is roughly 40-50 degrees below the reservoir temperature. This corresponds to a thermal power of 0.1-0.5 MW. At a higher injection temperature (60 degrees C) it is possible to produce at temperatures up to about 75 degrees C over a longer time horizon, but the thermal power is then lower: 0.1-0.4 MW.
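The thermal power figures above follow from the heat balance P = m_dot * c_p * dT for the produced water. A back-of-envelope sketch (the rate and temperature drop below are taken from the ranges quoted in the abstract; the assumed c_p is that of liquid water):

```python
def thermal_power(mass_rate_kg_s, delta_T_K, cp=4186.0):
    """Thermal power extracted from a water stream, in watts:
    P = m_dot * c_p * dT, with c_p the specific heat of water, J/(kg*K)."""
    return mass_rate_kg_s * cp * delta_T_K
```

At 3 kg/s and a 40 K useful temperature difference this gives roughly 0.5 MW, consistent with the upper end of the reported 0.1-0.5 MW range.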
Tue, 05 Oct 2010 00:00:00 GMThttp://hdl.handle.net/1956/47262010-10-05T00:00:00ZTwo Level Additive Schwarz Preconditioner For Control Volume Finite Element Methods
http://hdl.handle.net/1956/4725
Two Level Additive Schwarz Preconditioner For Control Volume Finite Element Methods
Gundersen, Kristian
Master thesis
In this thesis we investigate numerically the convergence properties of the control volume finite element method (CVFEM) preconditioned with a two-level overlapping additive Schwarz method. Relevant theory regarding the CVFEM, the Schwarz framework and the iterative solver, the Generalized Minimal Residual method (GMRES), is explained.
Wed, 22 Sep 2010 00:00:00 GMThttp://hdl.handle.net/1956/47252010-09-22T00:00:00ZHierarchical Bayesian Survival Analysis of Age-Specific Data From Birds' Nests
http://hdl.handle.net/1956/4724
Hierarchical Bayesian Survival Analysis of Age-Specific Data From Birds' Nests
Willgohs, Niejing
Master thesis
In this thesis, I first present the grassland bird data from Wells (2007), which have been analyzed using several different methods of estimating nest survival rates. The hierarchical Bayesian method of Cao (2009) is then introduced as a new model for estimating nest-specific survival rates from doubly censored, left-truncated data. I compare the two methods, and in the course of the comparison the Cox proportional hazards model and the intrinsic autoregressive prior are studied.
In the second half of the thesis, different methods of data analysis are introduced, the deviance information criterion is presented, and the Bayesian method is compared with the Mayfield method.
The hierarchical Bayesian method is relatively new and is admittedly a complicated model for readers unfamiliar with Bayesian inference and high-dimensional integration. Nevertheless, it is a valuable statistical tool. The deviance information criterion is a new method of analyzing data; users can choose different priors in order to obtain different estimates, which makes it widely applicable in statistics.
Fri, 19 Nov 2010 00:00:00 GMThttp://hdl.handle.net/1956/47242010-11-19T00:00:00ZHistological type and grade of breast cancer tumors by parity, age at birth, and time since birth: a register-based study in Norway
http://hdl.handle.net/1956/4684
Histological type and grade of breast cancer tumors by parity, age at birth, and time since birth: a register-based study in Norway
Albrektsen, Grethe; Heuch, Ivar; Thoresen, Steinar Ø.
Peer reviewed; Journal article
Background
Some studies have indicated that reproductive factors affect the risk of histological types of breast cancer differently. The long-term protective effect of childbirth is preceded by a short-term adverse effect. Few studies have examined whether tumors diagnosed shortly after birth have specific histological characteristics.
Methods
In the present register-based study, comprising information for 22,867 Norwegian breast cancer cases (20-74 years), we examined whether histological type (9 categories) and grade of tumor (2 combined categories) differed by parity or age at first birth. Associations with time since birth were evaluated among 9709 women diagnosed before age 50 years. Chi-square tests were applied for comparing proportions, whereas odds ratios (each histological type vs. ductal, or grade 3-4 vs. grade 1-2) were estimated in polytomous and binary logistic regression analyses.
Results
Ductal tumors, the most common histological type, accounted for 81.4% of all cases, followed by lobular tumors (6.3%) and unspecified carcinomas (5.5%). Other subtypes accounted for 0.4%-1.5% of the cases each. For all histological types, the proportions differed significantly by age at diagnosis. The proportion of mucinous and tubular tumors decreased with increasing parity, whereas Paget disease and medullary tumors were most common in women of high parity. An increasing trend with increasing age at first birth was most pronounced for lobular tumors and unspecified carcinomas; an association in the opposite direction was seen in relation to medullary and tubular tumors. In age-adjusted analyses, only the proportions of unspecified carcinomas and lobular tumors decreased significantly with increasing time since first and last birth. However, ductal tumors, and malignant sarcomas, mainly phyllodes tumors, seemed to occur at higher frequency in women diagnosed <2 years after first childbirth. The proportions of medullary tumors and Paget disease were particularly high among women diagnosed 2-5 years after last birth. The high proportion of poorly differentiated tumors in women with a recent childbirth was partly explained by young age.
Conclusion
Our results support previous observations that reproductive factors affect the risk of histological types of breast cancer differently. Sarcomas, medullary tumors, and possibly also Paget disease, may be particularly susceptible to pregnancy-related exposure.
Fri, 21 May 2010 00:00:00 GMThttp://hdl.handle.net/1956/46842010-05-21T00:00:00ZArachnoid cysts do not contain cerebrospinal fluid: A comparative chemical analysis of arachnoid cyst fluid and cerebrospinal fluid in adults
http://hdl.handle.net/1956/4674
Arachnoid cysts do not contain cerebrospinal fluid: A comparative chemical analysis of arachnoid cyst fluid and cerebrospinal fluid in adults
Berle, Magnus; Wester, Knut; Ulvik, Rune Johan; Kroksveen, Ann Cathrine; Haaland, Øystein; Amiry-Moghaddam, Mahmood; Berven, Frode S.
Journal article; Peer reviewed
Background
Arachnoid cyst (AC) fluid has not previously been compared with cerebrospinal fluid (CSF) from the same patient. ACs are commonly referred to as containing "CSF-like fluid". The objective of this study was to characterize AC fluid by clinical chemistry and to compare AC fluid to CSF drawn from the same patient. Such comparative analysis can shed further light on the mechanisms by which ACs are filled and sustained.
Methods
Cyst fluid from 15 adult patients with unilateral temporal AC (9 female, 6 male, age 22-77y) was compared with CSF from the same patients by clinical chemical analysis.
Results
AC fluid and CSF had the same osmolarity. There were no significant differences in the concentrations of sodium, potassium, chloride, calcium, magnesium or glucose. We found a significantly elevated concentration of phosphate in AC fluid (0.39 versus 0.35 mmol/L in CSF; p = 0.02), and significantly reduced concentrations of total protein (0.30 versus 0.41 g/L; p = 0.004), ferritin (7.8 versus 25.5 ug/L; p = 0.001) and lactate dehydrogenase (17.9 versus 35.6 U/L; p = 0.002) in AC fluid relative to CSF.
Conclusions
AC fluid is not identical to CSF. The differential composition of AC fluid relative to CSF supports secretion or active transport as the mechanism underlying cyst filling. Oncotic pressure gradients or slit-valves as mechanisms for generating fluid in temporal ACs are not supported by these results.
Thu, 10 Jun 2010 00:00:00 GMThttp://hdl.handle.net/1956/46742010-06-10T00:00:00ZSub-semi-Riemannian geometry on Heisenberg-type groups
http://hdl.handle.net/1956/4604
Sub-semi-Riemannian geometry on Heisenberg-type groups
Korolko, Anna
Doctoral thesis
Fri, 04 Feb 2011 00:00:00 GMThttp://hdl.handle.net/1956/46042011-02-04T00:00:00ZLattice-Boltzmann modelling of spatial variation in surface tension and wetting effects
http://hdl.handle.net/1956/4582
Lattice-Boltzmann modelling of spatial variation in surface tension and wetting effects
Stensholt, Sigvat Kuekiatngam
Doctoral thesis
The main focus of the project underlying this thesis is to study the effects that spatial variations in capillary effects have on fluid flow. This requires knowledge of the underlying fluid flow equations, and a numerical method for carrying out simulations.
The first part of the thesis deals with the underlying theory of the methods. Classical fluid theory is given a brief introduction, followed by lattice Boltzmann theory and methods for two-phase flow, together with some basic simulations demonstrating how the two-phase flow methods work. Papers which have been submitted or presented make up the second part of the thesis.
Chapter 1 covers classical fluid flow. Particular emphasis is placed on the capillary effects of surface tension and wetting. The chapter also covers theory on surfactants and their applications.
Chapter 2 covers the foundations of the lattice Boltzmann method used throughout the project. These methods are a fairly new way of simulating fluid flow, even though they are based on the century-old Boltzmann equation. The underlying algorithms are simple, but the simulated flow can be quite complex.
Chapter 3 covers how the lattice Boltzmann method deals with more complex flow, including multiphase flow and flow with solute components. Variations in surface tension are also covered.
Chapter 4 illustrates some of the properties of the multiphase lattice Boltzmann methods used in this thesis.
Chapter 5 is a summary of the work conducted. A short description of the papers included in the second part of the thesis is provided, and the chapter also considers further extensions and possible improvements to this work.
Thu, 26 Aug 2010 00:00:00 GMThttp://hdl.handle.net/1956/45822010-08-26T00:00:00ZRobust Control Volume Methods for Reservoir Simulation on Challenging Grids
http://hdl.handle.net/1956/4577
Robust Control Volume Methods for Reservoir Simulation on Challenging Grids
Keilegavlen, Eirik
Doctoral thesis
Fri, 19 Feb 2010 00:00:00 GMThttp://hdl.handle.net/1956/45772010-02-19T00:00:00ZOn Semi-implicit Splitting Schemes for the Beltrami Color Image Filtering
http://hdl.handle.net/1956/4541
On Semi-implicit Splitting Schemes for the Beltrami Color Image Filtering
Rosman, Guy; Dascal, Lorina; Tai, Xue-Cheng; Kimmel, Ron
Peer reviewed; Journal article
The Beltrami flow is an efficient nonlinear filter that has been shown to be effective for color image processing. The corresponding anisotropic diffusion operator strongly couples the spectral components. Usually, this flow is implemented by explicit schemes, which are stable only for very small time steps and therefore require many iterations. In this paper we introduce a semi-implicit Crank-Nicolson scheme based on locally one-dimensional (LOD)/additive operator splitting (AOS) for implementing the anisotropic Beltrami operator. The mixed spatial derivatives are treated explicitly, while the non-mixed derivatives are approximated in an implicit manner. In the case of constant coefficients, the LOD splitting scheme is proven to be unconditionally stable. Numerical experiments indicate that the proposed scheme is also stable in more general settings. Stability, accuracy, and efficiency of the splitting schemes are tested in applications such as the Beltrami-based scale-space, Beltrami denoising and Beltrami deblurring. In order to further accelerate the convergence of the numerical scheme, the reduced rank extrapolation (RRE) vector extrapolation technique is employed.
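The appeal of treating the non-mixed derivatives implicitly is that each one-dimensional solve is unconditionally stable, at the cost of one tridiagonal system per direction. A minimal 1D sketch using backward Euler and the Thomas algorithm (the Beltrami coupling itself is omitted; this only illustrates the implicit one-dimensional step such splittings rely on):

```python
def implicit_diffusion_step(u, dt, dx, D=1.0):
    """One backward-Euler step of u_t = D u_xx with zero-flux boundaries,
    solved by the Thomas (tridiagonal) algorithm. Stable for any dt, which
    is the motivation for treating non-mixed derivatives implicitly."""
    n = len(u)
    r = D * dt / dx**2
    # tridiagonal system (I - r*L) u_new = u, with L the 1D Laplacian
    a = [-r] * n            # sub-diagonal
    b = [1 + 2 * r] * n     # main diagonal
    c = [-r] * n            # super-diagonal
    b[0] = b[-1] = 1 + r    # zero-flux (Neumann) boundary rows
    # forward elimination
    cp = [0.0] * n
    dp = [0.0] * n
    cp[0] = c[0] / b[0]
    dp[0] = u[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (u[i] - a[i] * dp[i - 1]) / m
    # back substitution
    out = [0.0] * n
    out[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        out[i] = dp[i] - cp[i] * out[i + 1]
    return out
```

With zero-flux boundaries the step conserves the total of u exactly, and an initial spike is smeared symmetrically, as a diffusion step should.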
Sat, 01 Jan 2011 00:00:00 GMThttp://hdl.handle.net/1956/45412011-01-01T00:00:00ZBridging the ensemble Kalman filter and particle filters: the adaptive Gaussian mixture filter
http://hdl.handle.net/1956/4540
Bridging the ensemble Kalman filter and particle filters: the adaptive Gaussian mixture filter
Stordal, Andreas Størksen; Karlsen, Hans A.; Nævdal, Geir; Skaug, Hans J.; Vallès, Brice
Peer reviewed; Journal article
The nonlinear filtering problem occurs in many scientific areas. Sequential Monte Carlo solutions with the correct asymptotic behavior, such as particle filters, exist, but they are computationally too expensive when working with high-dimensional systems. The ensemble Kalman filter (EnKF) is a more robust method that has shown promising results with a small sample size, but the samples are not guaranteed to come from the true posterior distribution. By approximating the model error with a Gaussian distribution, one may represent the posterior distribution as a sum of Gaussian kernels. The resulting Gaussian mixture filter has the advantage of both a local Kalman-type correction and the weighting/resampling step of a particle filter. The Gaussian mixture approximation relies on a bandwidth parameter which often has to be kept quite large in order to avoid a weight collapse in high dimensions. As a result, the Kalman correction is too large to capture highly non-Gaussian posterior distributions. In this paper, we have extended the Gaussian mixture filter (Hoteit et al., Mon Weather Rev 136:317–334, 2008) and also made the connection to particle filters more transparent. In particular, we introduce a tuning parameter for the importance weights. In the last part of the paper, we have performed a simulation experiment with the Lorenz40 model where our method has been compared to the EnKF and a full implementation of a particle filter. The results clearly indicate that the new method has advantages compared to the standard EnKF.
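The tuning parameter for the importance weights can be thought of as bridging equal EnKF-style weights and full particle-filter weights. A sketch of one such tempering scheme (illustrative of the idea; not necessarily the exact parametrization used in the paper):

```python
def tempered_weights(raw_weights, alpha):
    """Bridge between equal weighting (alpha = 0, EnKF-like) and full
    particle-filter importance weights (alpha = 1) by flattening the
    raw weights: w_i^alpha, renormalized to sum to one."""
    w = [x ** alpha for x in raw_weights]
    z = sum(w)
    return [x / z for x in w]

def effective_sample_size(weights):
    """N_eff = 1 / sum(w_i^2); values near 1 signal weight collapse,
    the failure mode the bandwidth/tuning parameters guard against."""
    return 1.0 / sum(w * w for w in weights)
```

Flattening the weights raises the effective sample size at the price of a bias toward the Kalman-type correction, which is the trade-off the adaptive filter navigates.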
Fri, 01 Jan 2010 00:00:00 GMThttp://hdl.handle.net/1956/45402010-01-01T00:00:00Z