Department of Mathematics
http://hdl.handle.net/1956/1065
Sat, 23 May 2015 06:08:53 GMT
http://hdl.handle.net/1956/9868
Entropy solutions to the Compressible Euler Equations
Svärd, Magnus
Journal article
We consider the three-dimensional Euler equations of gas dynamics
on a bounded periodic domain and a bounded time interval. We prove that
the Lax-Friedrichs scheme can be used to produce a sequence of solutions with
ever finer resolution for any appropriately bounded (but not necessarily small)
initial data. Furthermore, we show that if the density remains strictly positive
in the sequence of solutions at hand, a subsequence converges to an entropy
solution. We provide numerical evidence for these results by computing a
sensitive Kelvin-Helmholtz problem.
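As a hedged illustration of the scheme (a 1D scalar sketch with Burgers' flux standing in for the three-dimensional Euler fluxes, not the authors' solver), the Lax-Friedrichs update on a periodic grid reads:

```python
import numpy as np

def lax_friedrichs(u, dx, dt, flux):
    """One Lax-Friedrichs step for u_t + f(u)_x = 0 on a periodic grid."""
    up = np.roll(u, -1)   # u_{j+1}
    um = np.roll(u, 1)    # u_{j-1}
    return 0.5 * (up + um) - dt / (2.0 * dx) * (flux(up) - flux(um))

# Burgers' flux f(u) = u^2/2 as a scalar stand-in for the Euler fluxes
N = 200
x = np.linspace(0.0, 1.0, N, endpoint=False)
dx = 1.0 / N
u = np.sin(2.0 * np.pi * x)
dt = 0.4 * dx / max(1e-12, np.abs(u).max())  # CFL-limited time step
for _ in range(100):
    u = lax_friedrichs(u, dx, dt, lambda v: 0.5 * v**2)
```

The half-sum of the neighbouring values supplies the numerical dissipation that keeps the scheme stable across shocks, at the cost of smearing sharp features.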
Mon, 11 May 2015 00:00:00 GMT
http://hdl.handle.net/1956/9794
SUCCESS: SUbsurface CO2 storage–Critical elements and superior strategy
Aker, Eyvind; Bjørnarå, Tore Ingvald; Braathen, Alvar; Brandvoll, Øyvind; Dahle, Helge K.; Nordbotten, Jan Martin; Aagaard, Per; Hellevang, Helge; Alemu, Binyam Lema; Pham, Van Thi Hai; Johansen, Harald; Wangen, Magnus; Nøttvedt, Arvid; Aavatsmark, Ivar; Johannessen, Truls; Durand, Dominique
Journal article
SUCCESS is a Center for Environmental Energy Research in Norway and performs research related to geological storage of CO2 in the subsurface. The SUCCESS centre was established by the Research Council of Norway together with several Norwegian research institutes and universities. The centre is hosted by Christian Michelsen Research. Through international cooperation and open research, the SUCCESS centre will fill gaps in strategic knowledge and provide a system for learning and the development of new competency to ensure safe and effective CO2 injection, storage and monitoring. In this paper we briefly present the main focus areas of the centre and some recent results obtained by the research partners. The results relate to geochemical effects, reservoir modeling, monitoring of the geomechanical response, and the marine environment. A brief status of the field trial, the Longyearbyen CO2 Lab on Svalbard, is also provided.
Sat, 01 Jan 2011 00:00:00 GMT
http://hdl.handle.net/1956/9791
Field-case simulation of CO2-plume migration using vertical-equilibrium models
Nilsen, Halvor Møll; Herrera, Paulo; Ashraf, Seyed Meisam; Ligaard, Ingeborg S.; Iding, Martin; Hermanrud, Christian; Lie, Knut-Andreas; Nordbotten, Jan Martin; Dahle, Helge K.; Keilegavlen, Eirik
Journal article
When injected into deep saline aquifers, CO2 moves radially away from the injection well and progressively higher in the formation because of buoyancy forces. Analyses have shown that after the injection period, CO2 will potentially migrate over several kilometers in the horizontal direction but only tens of meters in the vertical direction, limited by the aquifer caprock. Because of the large horizontal plume dimensions, three-dimensional numerical simulations of the plume migration over long periods of time are computationally intensive. Thus, to get results within a reasonable time frame, one is typically forced to use coarse meshes and long time steps, which produce inaccurate results because of numerical errors in resolving the plume tip.
Given the large aspect ratio between the vertical and horizontal plume dimensions, it is reasonable to approximate the CO2 migration using vertically averaged models. Such models can, in many cases, be more accurate than coarse three-dimensional computations. In particular, models based on vertical equilibrium (VE) are attractive for simulating the long-term fate of CO2 sequestered in deep saline aquifers. The reduced spatial dimensionality resulting from the vertical integration ensures that the computational performance of VE models exceeds that of standard three-dimensional models. Thus, VE models are suitable for studying the long-time and large-scale behavior of plumes in real large-scale CO2-injection projects. We investigate the use of VE models to simulate CO2 migration in a real large-scale field case based on data from the Sleipner site in the North Sea. In particular, we focus on a VE formulation that incorporates the aquifer geometry and heterogeneity, and that considers the effects of hydrodynamic and residual trapping. We compare the results of VE simulations with standard reservoir simulation tools on test cases, discuss their advantages and limitations, and show how, provided that certain conditions are met, VE models can be used to give reliable estimates of long-term CO2 migration.
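To make the vertical averaging concrete, here is a minimal sketch of one ingredient of a sharp-interface VE model: the thickness-weighted permeability seen by a buoyant plume filling the top of a layered column (layer values are hypothetical, not Sleipner data):

```python
def plume_avg_permeability(k_layers, dz, h):
    """Thickness-weighted average permeability over the top h of a column
    of uniform layers (top layer first, each of thickness dz).

    In a sharp-interface vertical-equilibrium model the buoyant CO2 plume
    occupies the top of the column, so its effective lateral permeability
    is this average over the plume-filled interval."""
    n_full = int(h // dz)            # layers fully inside the plume
    frac = h / dz - n_full           # fraction of the partially filled layer
    total = sum(k_layers[:n_full]) * dz
    if frac > 0 and n_full < len(k_layers):
        total += k_layers[n_full] * frac * dz
    return total / h

# Example: a three-layer column (permeabilities in mD), plume 1.5 layers thick
k_eff = plume_avg_permeability([400.0, 200.0, 50.0], dz=1.0, h=1.5)
```

Integrating out the vertical coordinate in this way is what reduces the 3D flow problem to two lateral dimensions.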
Sat, 01 Jan 2011 00:00:00 GMT
http://hdl.handle.net/1956/9747
Upscaled modeling of CO2 injection and migration with coupled thermal processes
Gasda, Sarah Eileen; Stephansen, Annette; Aavatsmark, Ivar; Dahle, Helge K.
Journal article
A practical modeling approach for CO2 storage over relatively large length and time scales is the vertical-equilibrium
model, which solves partially integrated conservation equations for flow in two lateral dimensions. We couple heat
transfer to the vertical-equilibrium framework for fluid flow, focusing on the thermal processes that most affect the
CO2 plume. We investigate a simplified representation of heat exchange that also includes transport of heat within
the plume. In addition, we explore available CO2 thermodynamic models for reliable prediction of density under
different injection pressures and temperatures. The model concept is demonstrated on simplified systems.
Tue, 01 Jan 2013 00:00:00 GMT
http://hdl.handle.net/1956/9745
Assessing Model Uncertainties Through Proper Experimental Design
Hvidevold, Hilde Kristine; Alendal, Guttorm; Johannessen, Truls; Mannseth, Trond
Journal article
This paper assesses how parameter uncertainties in the model for the rise velocity of CO2 droplets in the ocean cause uncertainties in their rise and dissolution in marine waters. The parameter uncertainties in the rise velocity of both hydrate-coated and hydrate-free droplets are estimated from experimental data. Thereafter, the rise velocity is coupled with a mass-transfer model to simulate the rise and dissolution of a single droplet.
The assessment shows that parameter uncertainties are highest for large droplets. However, it is also shown that in some circumstances varying the temperature gives a significant change in the rise distance of droplets.
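A minimal sketch of this coupling (forward-Euler in time, with placeholder constant rates rather than the experimentally fitted models from the paper):

```python
def droplet_fate(r0, depth0, rise_velocity, shrink_rate, dt=1.0):
    """Forward-Euler integration of a single droplet's rise and dissolution.

    rise_velocity(r): rise speed [m/s] as a function of radius (the
    uncertain, experimentally fitted quantity in the paper; placeholder here).
    shrink_rate(r): dr/dt < 0 [m/s] from a mass-transfer model (placeholder).
    Returns the depth [m] at which the droplet has fully dissolved."""
    r, depth = r0, depth0
    while r > 0.0 and depth > 0.0:
        depth -= rise_velocity(r) * dt  # droplet rises
        r += shrink_rate(r) * dt        # droplet dissolves
    return max(depth, 0.0)

# Constant-rate example: a 5 mm droplet released at 800 m depth
final_depth = droplet_fate(5e-3, 800.0, lambda r: 0.1, lambda r: -1e-5)
```

With radius-dependent rates (and the hydrate-coated vs hydrate-free distinction) the same loop propagates the parameter uncertainty into the rise distance.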
Tue, 01 Jan 2013 00:00:00 GMT
http://hdl.handle.net/1956/9724
Risk of Leakage versus Depth of Injection in Geological Storage
Celia, Michael A.; Nordbotten, Jan Martin; Bachu, Stefan; Dobossy, Mark E.; Court, Benjamin
Journal article
One of the outstanding challenges for large-scale CCS operations is to develop reliable quantitative risk assessments with a focus on leakage of both injected CO2 and displaced brine. A critical leakage pathway is associated with the century-long legacy of oil and gas exploration and production, which has led to many millions of wells being drilled. Many of those wells are in locations that would otherwise be excellent candidates for CCS operations, especially across many parts of North America. Quantitative analysis of the problem requires special computational techniques because of the unique challenges associated with simulation of injection and leakage in systems that include hundreds or thousands of existing wells over domains characterized by layered structures in the vertical direction and very large horizontal extent. An important feature of these kinds of systems is the depth of each well, and the fact that the number of wells penetrating different formations decreases as a function of depth. As such, one might reasonably expect the risk of leakage to decrease with depth of injection. With the special computational models developed to simulate injection and leakage along multiple wells, in layered systems with multiple formations, quantitative assessment of risk reduction as a function of injection depth can be made. An example of such a system corresponds to the Wabamun Lake area southwest of Edmonton, Alberta, Canada, where several large coal-fired power plants are located. Use of information about both the existing wells and the local stratigraphy allows a realistic model to be constructed. Leakage along existing wells is assumed to follow Darcy’s Law, and is characterized by a set of effective permeability values. These values are assigned stochastically, using several different methods, within a Monte Carlo simulation framework. Computational results show the clear trade-off between depth of injection and risk of leakage. 
The results also show how properties within the different formations affect the risk profiles. In the Wabamun Lake area, one of the formations has the highest injectivity, by far, while having a moderate number of existing wells. Its moderate risk of leakage, as compared to injections in formations above and below, shows some of the key factors that are likely to influence injection design for large-scale CCS operations.
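The stochastic Darcy-leakage calculation can be sketched as follows (a toy single-formation version with hypothetical parameter values; the paper's model resolves layered formations and interacting wells, which this omits):

```python
import numpy as np

def leakage_risk(n_wells, dp, mu, L, area, k_mu, k_sigma, q_crit,
                 n_mc=10_000, seed=0):
    """Monte Carlo probability that total Darcy leakage along n_wells
    abandoned wells exceeds q_crit [m^3/s] (hypothetical sketch).

    Each well's effective permeability [m^2] is drawn log-normally;
    per-well flux follows Darcy's law Q = k * A * dp / (mu * L)."""
    rng = np.random.default_rng(seed)
    k = rng.lognormal(k_mu, k_sigma, size=(n_mc, n_wells))
    q_total = (k * area * dp / (mu * L)).sum(axis=1)
    return float((q_total > q_crit).mean())

# Illustrative numbers: 10 wells, 1 MPa overpressure, 100 m caprock
p_exceed = leakage_risk(10, 1e6, 5e-5, 100.0, 0.01,
                        k_mu=np.log(1e-15), k_sigma=1.0, q_crit=1e-8)
```

Deepening the injection reduces n_wells (fewer wells penetrate deep formations), which is the depth/risk trade-off the study quantifies.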
Sun, 01 Feb 2009 00:00:00 GMT
http://hdl.handle.net/1956/9720
Active and integrated management of water resources throughout CO2 capture and sequestration operations
Court, Benjamin; Celia, Michael A.; Nordbotten, Jan Martin; Elliot, Thomas R.
Journal article
Most projected climate change mitigation strategies will require a significant expansion of CO2 Capture and Sequestration (CCS) in the next two decades. Four major categories of challenges are being actively researched: CO2 capture cost, geological sequestration safety, legal and regulatory barriers, and public acceptance. Herein we propose an additional major challenge category across all CCS operations: water management. For example, a coal-fired power plant retrofitted for CCS requires twice as much cooling water as the original plant. This increased demand may be accommodated by brine extraction and treatment, which would concurrently function as large-scale pressure management and a potential source of freshwater. At present, the interactions among freshwater extraction, CO2 injection, and brine management are considered too narrowly (and, in the case of freshwater, almost completely overlooked) in the technical and regulatory CCS community. This paper presents an overview of each of these challenges and of potential integration opportunities. Active management of CCS operations through an integrated approach (including brine production, treatment, use for cooling, and partial reinjection) can address these challenges simultaneously, with several synergistic advantages. The paper also considers the related potential impacts of pore-space competition (with future groundwater use, gas storage and shale gas) on CCS expansion. Freshwater and brine must become key decision-making inputs throughout CCS operations, building on existing successful industrial-scale integrations.
Sat, 01 Jan 2011 00:00:00 GMT
http://hdl.handle.net/1956/9719
An efficient software framework for performing industrial risk assessment of leakage for geological storage of CO2
Dobossy, Mark E.; Celia, Michael A.; Nordbotten, Jan Martin
Journal article
In response to anthropogenic CO2 emissions, geological storage has emerged as a practical and scalable bridge technology while renewables and other environmentally friendly energy production methods mature. While an attractive solution, geological storage of CO2 has inherent risk. Two primary concerns are recognized: (1) leakage of CO2 through caprock imperfections, and (2) brine displacement resulting in contamination of drinking water sources. Three mechanisms for both CO2 and brine leakage have been identified: diffuse leakage through the caprock, leakage through faults and fractures in the caprock, and finally, leakage through man-made pathways such as abandoned wells from oil and gas exploration. While the first two leakage mechanisms are important, we emphasize the risks associated with the presence of abandoned wells. This is due to the large number and density of wells from a history of oil and gas exploration around the world, and the high degree of uncertainty surrounding the properties of these abandoned wells. With currently proposed legislation in both the United States and Europe, a need is emerging for practical assessment of leakage risk. In order to accurately predict leakage of brine and CO2 from the injection layer, the geological information for the injection site and the location and makeup of the man-made leakage pathways alluded to above must be taken into account. Unfortunately, both the geology and the abandoned-well metadata are typically high in uncertainty, which must be accounted for. With such a high number of random variables, the current state of the art is to run many realizations of a system using a Monte Carlo approach. This requires that the underlying solution algorithms be accurate and efficient. In the past, many researchers in both academia and industry have turned to robust numerical analysis packages used in the oil industry.
However, due to the large range of scales important to this problem (domains of tens of kilometers on a side affected by leakage pathways with diameters of tens of centimeters), such modeling techniques become computationally expensive for all but the most basic analysis. A computational model developed at Princeton University, and currently being commercialized by Geological Storage Consultants, LLC, has been shown to be efficient, with sufficient accuracy to allow for comprehensive risk assessment of CO2 injection projects. The model allows for mixing solution methods, using computationally expensive algorithms for formations of greater importance (e.g., the injection formation) and more efficient, simplified algorithms in other areas of the domain. This ability to arbitrarily mix solution methods offers significant flexibility in the design and execution of models. This paper addresses the framework and algorithms used, and illustrates the importance of efficiency and parallelism using the case study of an injection site in Alberta, Canada. We show how the framework can be used for project planning, for risk mitigation (insurance), and for regulatory groups. Finally, the importance of flexible analysis tools that allow for efficient and effective management of computational resources is discussed.
Sat, 01 Jan 2011 00:00:00 GMT
http://hdl.handle.net/1956/9718
Detecting leakage of brine or CO2 through abandoned wells in a geological sequestration operation using pressure monitoring wells
Nogues, Juan P.; Nordbotten, Jan Martin; Celia, Michael A.
Journal article
For risk assessment, policy design and GHG emission accounting, it is extremely important to know if any CO2 or brine has leaked from a geological sequestration (GS) operation. As such, it is important to understand whether certain technologies can be used to detect such leakage. Detection of leakage is one of the most challenging problems associated with GS because of the high uncertainty in the nature and location of leakage pathways. In North America, for example, millions of legacy oil and gas wells present the possibility that CO2 and brine may leak out of the injection formation. The available information for these potential leaky wells is very limited, and the main parameters that control leakage, such as the permeability of the sealing material, are not known. Here we propose to explore the possibility of detecting such leakage by the use of pressure-monitoring wells located in a formation overlying the injection formation. The detection analysis is based on a system of equations that solves for the propagation of a pressure pulse using the superposition principle and an approximation to the well function. We explore the questions of what can be gained by using pressure-monitoring wells and what the limitations are given a specific accuracy threshold of the measuring device. We also try to answer the question of where these monitoring wells should be placed to optimize the objective of a monitoring scheme. We believe these results can ultimately lead to practical design strategies for monitoring schemes, including quantitative estimation of the increased probability of leak detection per added observation well.
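The superposition underlying the detection analysis can be sketched with the Cooper-Jacob logarithmic approximation to the Theis well function (parameter values are hypothetical; the paper's specific approximation is not given here):

```python
import math

GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def drawdown(Q, T, S, r, t):
    """Cooper-Jacob (small-u) approximation to the Theis solution:
    s = Q/(4*pi*T) * W(u), with u = r^2*S/(4*T*t) and W(u) ~ -gamma - ln(u)."""
    u = r * r * S / (4.0 * T * t)
    return Q / (4.0 * math.pi * T) * (-GAMMA - math.log(u))

def superposed_drawdown(wells, T, S, x_obs, y_obs, t):
    """Superpose the responses of several (x, y, Q) sources at one
    observation point; linearity of the pressure equation permits this."""
    total = 0.0
    for x, y, Q in wells:
        r = math.hypot(x_obs - x, y_obs - y)
        total += drawdown(Q, T, S, r, t)
    return total

# One injector seen by a monitoring well 100 m away, after ~11.6 days
s_obs = superposed_drawdown([(0.0, 0.0, 0.01)], T=1e-3, S=1e-4,
                            x_obs=100.0, y_obs=0.0, t=1e6)
```

Leaky wells act as additional (negative-rate) sources, so a measurable deviation from the injector-only signal at a monitoring well is the detection criterion; the measuring device's accuracy threshold bounds how small a deviation can be resolved.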
Sat, 01 Jan 2011 00:00:00 GMT
http://hdl.handle.net/1956/9717
The impact of local-scale processes on large-scale CO2 migration and immobilization
Gasda, Sarah Eileen; Nordbotten, Jan Martin; Celia, Michael A.
Journal article
<p>Storage security of injected carbon dioxide (CO2) is an essential component of risk management for geological carbon sequestration operations. During the injection and early post-injection periods, CO2 leakage may occur along faults and leaky wells, but this risk may be partly managed by proper site selection and sensible deployment of monitoring and remediation technologies. On the other hand, long-term storage security is an entirely different risk management problem, one that is dominated by a mobile CO2 plume that may travel over very large spatial distances, over long time periods, before it is trapped by a variety of different physical and chemical processes. In the post-injection phase, the mobile CO2 plume migrates largely due to buoyancy forces, following the natural topography of the geological formation. The primary trapping mechanisms are capillary and solubility trapping, which evolve over thousands to tens of thousands of years and can immobilize a significant portion of the mobile, free-phase CO2 plume. However, both the migration and trapping processes are inherently complex, involving a combination of small and large spatial scales and acting over a range of time scales. Solubility trapping is a prime example of this complexity, where small-scale density instabilities in the dissolved CO2 region lead to convective mixing that has a significant effect on the large-scale dissolution process over very long time scales. Another example is the effect of capillary forces on the evolution of mobile CO2, an often-neglected process except with regard to residual trapping. As the plume migrates due to buoyancy and viscous forces, local capillary effects acting at the CO2-brine interface lead to a transition zone where both fluids are present in the mobile state. This small-scale effect may have a significant impact on large-scale plume migration as well as on long-term residual and dissolution trapping. 
Using appropriate models that can capture both large and small-scale effects is essential for understanding the role of these processes on the long-term storage security of CO2 sequestration operations.</p>
<p>There are several approaches to modeling long-term CO2 trapping mechanisms. One modeling option is the use of traditional numerical methods, which are often highly sophisticated models that can handle multiple complex phenomena with high levels of accuracy. However, these complex models quickly become prohibitively expensive for the type of large-scale, long-term modeling that is necessary for risk assessment applications such as the late post-injection period. We present an alternative modeling option that combines vertically-averaged governing equations with an upscaled representation of the dissolution-convective mixing process and the local capillary transition zone at the CO2-brine interface. CO2 injection is solved numerically on a coarse grid, capturing the large-scale injection problem and the post-injection capillary trapping, while the upscaled dissolution and capillary fringe models capture these subscale effects and eliminate the need for expensive grid refinement to capture the subscale instabilities associated with convective mixing or the details of the capillary transition zone. With this modeling approach, we demonstrate the effect of different modeling choices associated with dissolution and capillary processes for typical large-scale geological systems.</p>
Sat, 01 Jan 2011 00:00:00 GMT
http://hdl.handle.net/1956/9705
A 3D computational study of effective medium methods applied to fractured media
Sævik, Pål Næverlid; Berre, Inga; Jakobsen, Morten; Lien, Martha
Journal article
This work evaluates and improves upon existing effective medium methods for permeability upscaling in fractured media. Specifically, we are concerned with the asymmetric self-consistent, symmetric self-consistent, and differential methods. In effective medium theory, inhomogeneity is modeled as ellipsoidal inclusions embedded in the rock matrix. Fractured media correspond to the limiting case of flat ellipsoids, for which we derive a novel set of simplified formulas. The new formulas have improved numerical stability properties, and require a smaller number of input parameters. To assess their accuracy, we compare the analytical permeability predictions with three-dimensional finite-element simulations. We also compare the results with a semi-analytical method based on percolation theory and curve-fitting, which represents an alternative upscaling approach. A large number of cases is considered, with varying fracture aperture, density, matrix/fracture permeability contrast, orientation, shape, and number of fracture sets. The differential method is seen to be the best choice for sealed fractures and thin open fractures. For high-permeable, connected fractures, the semi-analytical method provides the best fit to the numerical data, whereas the differential method breaks down. The two self-consistent methods can be used for both unconnected and connected fractures, although the asymmetric method is somewhat unreliable for sealed fractures. For open fractures, the symmetric method is generally the more accurate for moderate fracture densities, but only the asymmetric method is seen to have correct asymptotic behavior. The asymmetric method is also surprisingly accurate at predicting percolation thresholds.
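The flat-ellipsoid formulas derived in the paper are not reproduced here, but the fixed-point structure of a symmetric self-consistent method can be illustrated with the classical Bruggeman scheme for spherical inclusions (a deliberately simplified analogue of the fracture formulas):

```python
def bruggeman_keff(fractions, perms, tol=1e-12, max_iter=1000):
    """Bruggeman symmetric self-consistent effective permeability for
    spherical inclusions: solve
        sum_i f_i * (k_i - k_eff) / (k_i + 2*k_eff) = 0
    for k_eff by fixed-point iteration (each phase is embedded in the
    as-yet-unknown effective medium, hence the self-consistency)."""
    k = sum(f * kv for f, kv in zip(fractions, perms))  # arithmetic-mean start
    for _ in range(max_iter):
        num = sum(f * kv / (kv + 2.0 * k) for f, kv in zip(fractions, perms))
        den = sum(f / (kv + 2.0 * k) for f, kv in zip(fractions, perms))
        k_new = num / den
        if abs(k_new - k) < tol * max(k, 1e-30):
            return k_new
        k = k_new
    return k

# 50/50 mixture of impermeable and unit-permeability phases
k_eff = bruggeman_keff([0.5, 0.5], [0.0, 1.0])
```

Replacing spheres by flat ellipsoids (the fracture limit) changes the depolarization factors in the denominators, which is where the paper's simplified formulas and their improved stability come in.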
Tue, 01 Oct 2013 00:00:00 GMT
http://hdl.handle.net/1956/9682
Determining individual variation in growth and its implication for life-history and population processes using the Empirical Bayes method
Vincenzi, Simone; Mangel, Marc; Crivelli, Alain J.; Munch, Stephan; Skaug, Hans J.
Journal article
The differences in demographic and life-history processes between organisms living in the same population have important
consequences for ecological and evolutionary dynamics. Modern statistical and computational methods allow the
investigation of individual and shared (among homogeneous groups) determinants of the observed variation in growth. We
use an Empirical Bayes approach to estimate individual and shared variation in somatic growth using a von Bertalanffy
growth model with random effects. To illustrate the power and generality of the method, we consider two populations of
marble trout Salmo marmoratus living in Slovenian streams, where individually tagged fish have been sampled for more
than 15 years. We use year-of-birth cohort, population density during the first year of life, and individual random effects as
potential predictors of the von Bertalanffy growth function’s parameters k (rate of growth) and L∞ (asymptotic size). Our
results showed that size ranks were largely maintained throughout the lifetime of marble trout in both populations. According to
the Akaike Information Criterion (AIC), the best models showed different growth patterns for year-of-birth cohorts as well as
the existence of substantial individual variation in growth trajectories after accounting for the cohort effect. For both
populations, models including density during the first year of life showed that growth tended to decrease with increasing
population density early in life. Model validation showed that predictions of individual growth trajectories using the
random-effects model were more accurate than predictions based on mean size-at-age of fish.
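The growth curve at the heart of the analysis is compact; as a sketch (hypothetical parameter values, not the fitted trout estimates), individual trajectories under a random-effects von Bertalanffy model can be simulated as:

```python
import numpy as np

def von_bertalanffy(t, L_inf, k, t0=0.0):
    """von Bertalanffy growth curve: L(t) = L_inf * (1 - exp(-k * (t - t0)))."""
    return L_inf * (1.0 - np.exp(-k * (t - t0)))

# Random-effects flavour: each fish draws its own (k, L_inf) around shared
# cohort-level means (illustrative values only).
rng = np.random.default_rng(1)
ages = np.arange(0, 10)                                  # years
k_i = np.exp(rng.normal(np.log(0.3), 0.2, size=5))       # individual k
Linf_i = np.exp(rng.normal(np.log(30.0), 0.1, size=5))   # individual L_inf, cm
trajectories = von_bertalanffy(ages[None, :], Linf_i[:, None], k_i[:, None])
```

In the Empirical Bayes setting the individual (k, L_inf) are random effects estimated from repeated captures of tagged fish rather than drawn from known distributions as here.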
Thu, 11 Sep 2014 00:00:00 GMT
http://hdl.handle.net/1956/9675
Investigating population genetic structure in a highly mobile marine organism: The minke whale Balaenoptera acutorostrata acutorostrata in the North East Atlantic
Sanchez, Maria Quintela; Skaug, Hans J.; Øien, Nils Inge; Haug, Tore; Seliussen, Bjørghild Breistein; Solvang, Hiroko Kato; Pampoulie, Christophe; Kanda, Naohisa; Pastene, Luis A.; Glover, Kevin
Journal article
Inferring the number of genetically distinct populations and their levels of connectivity is of key importance for the
sustainable management and conservation of wildlife. This represents an extra challenge in the marine environment where
there are few physical barriers to gene-flow, and populations may overlap in time and space. Several studies have
investigated the population genetic structure within the North Atlantic minke whale with contrasting results. In order to
address this issue, we analyzed ten microsatellite loci and 331 bp of the mitochondrial D-loop in 2990 whales sampled in
the North East Atlantic in 2004 and 2007–2011. The primary findings were: (1) No spatial or temporal genetic
differentiations were observed for either class of genetic marker. (2) mtDNA identified three distinct mitochondrial lineages
without any underlying geographical pattern. (3) Nuclear markers showed evidence of a single panmictic population in the
NE Atlantic, according to STRUCTURE’s highest average likelihood, found at K = 1. (4) When K = 2 was accepted, based on
Evanno’s test, whales were divided into two more or less equally sized groups that showed significant genetic
differentiation between them but without any sign of underlying geographic pattern. However, mtDNA for these
individuals did not corroborate the differentiation. (5) In order to further evaluate the potential for cryptic structuring, a set
of 100 in silico generated panmictic populations was examined using the same procedures as above; this showed genetic
differentiation between two artificially divided groups, similar to the aforementioned observations. This demonstrates that
clustering methods may spuriously reveal cryptic genetic structure. Based upon these data, we find no evidence to support
the existence of spatial or cryptic population genetic structure of minke whales within the NE Atlantic. However, in order to
conclusively evaluate population structure within this highly mobile species, more markers will be required.
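The in-silico control described in finding (5), a panmictic population generated and then arbitrarily split in two, can be sketched with a minimal Hardy-Weinberg genotype simulator (not the authors' pipeline):

```python
import numpy as np

def simulate_panmictic(n_ind, n_loci, seed=0):
    """Genotypes (0/1/2 allele counts) for one panmictic population under
    Hardy-Weinberg equilibrium: per-locus allele frequencies drawn
    uniformly, individuals drawn independently from those frequencies."""
    rng = np.random.default_rng(seed)
    p = rng.uniform(0.1, 0.9, n_loci)          # allele frequency per locus
    return rng.binomial(2, p, size=(n_ind, n_loci))

# Split one panmictic sample into two arbitrary "populations"; any structure
# a clustering method reports between the halves is spurious by construction.
geno = simulate_panmictic(200, 50)
group_a, group_b = geno[:100], geno[100:]
freq_diff = np.abs(group_a.mean(0) - group_b.mean(0)) / 2.0  # allele-freq gap
```

Because the halves are exchangeable, their allele-frequency differences reflect only sampling noise; a clustering method that nevertheless assigns significant differentiation to such a split is exhibiting exactly the artifact the abstract describes.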
Tue, 30 Sep 2014 00:00:00 GMT
http://hdl.handle.net/1956/9666
Numerical solutions of two-phase flow with applications to CO2 sequestration and polymer flooding
Mykkeltvedt, Trine Solberg
Doctoral thesis
<p>This thesis addresses challenges related to mathematical and numerical modeling of flow in porous media. To address these challenges, two applications are considered: firstly, counter-current two-phase flow in a heterogeneous porous medium and, secondly, polymer flooding in the context of enhanced oil recovery. Furthermore, an upscaled model for CO2 migration is used to estimate effective rates of convective mixing from commercial-scale injection.</p>
<p>Numerically, the upstream mobility scheme is widely used to solve hyperbolic conservation laws. For flow in heterogeneous porous media there exists no convergence analysis for this scheme. Studies of the convergence performance of this scheme are important due to the extensive use of the upstream mobility scheme in the reservoir simulation community. We show that the upstream mobility scheme may exhibit large errors compared to the physically relevant solution when applied to a counter-current flow in a reservoir where discontinuities in the flux function are introduced through the permeability. A small perturbation of the relative permeability values can lead to a large difference in the solution produced by the upstream mobility scheme. Not only does the scheme encounter large errors compared to what is considered to be the physically relevant solution, but the solution also lacks entropy consistency.</p>
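As a hedged illustration of the scheme under discussion (a co-current 1D Buckley-Leverett setting with quadratic relative permeabilities, not the counter-current heterogeneous cases where the thesis documents its failure), the upstream-mobility update reads:

```python
import numpy as np

def mob_w(s):   # wetting-phase mobility, quadratic relperm (assumed)
    return s**2

def mob_n(s):   # non-wetting-phase mobility
    return (1.0 - s)**2

def upstream_step(s, dt, dx, vT=1.0):
    """One explicit step of the upstream-mobility scheme for 1D co-current
    Buckley-Leverett flow (total velocity vT > 0, no gravity): each phase
    mobility at an interface is evaluated in the upstream (left) cell."""
    lw, ln = mob_w(s), mob_n(s)
    f = lw / (lw + ln) * vT                  # interface flux, upwinded left
    f_in = np.concatenate(([f[0]], f[:-1]))  # inflow from the left neighbor
    return s - dt / dx * (f - f_in)

# Riemann problem: wetting phase (s = 1) displacing non-wetting phase (s = 0)
N, dx = 100, 1.0 / 100
s = np.concatenate([np.ones(N // 2), np.zeros(N // 2)])
dt = 0.2 * dx                                # CFL-safe for this flux
for _ in range(100):
    s = upstream_step(s, dt, dx)
```

In co-current flow both phases upwind from the same side and the scheme behaves well; the subtleties the thesis analyzes arise when gravity drives the phases in opposite directions and permeability is discontinuous, so each phase must be upwinded separately.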
<p>High-resolution schemes are often used for model problems where high accuracy is required in the presence of shocks or discontinuities. Polymer flooding represents such a system and is a difficult process to model, especially since the dynamics of the flow lead to concentration fronts that are not self-sharpening. The application of modern high-resolution schemes to a system that models polymer flooding is considered and different first- and higher-order schemes are compared in terms of how the discontinuities are treated. Through numerous numerical experiments some special numerical artifacts of the polymer system are uncovered. The need of high-resolution schemes and the importance of their applicability for the polymer problem is addressed.</p>
<p>The process of CO2 migration ranges over multiple scales, which results in challenges for the modeling and simulation of this system. This motivates the need for an upscaled model and upscaled parameters that can capture both large- and small-scale spatial and temporal effects. The ongoing CO2 injection at the Utsira formation is considered as a field-scale study for CO2 storage. Through an upscaled model for CO2 migration we obtain the first field-scale estimates of the effective upscaled convective mixing rates in this context. The findings are comparable to, but somewhat higher than, rates reported in the existing literature based on fine-scale numerical simulations. Our work validates the use of numerical simulations to obtain upscaled convective mixing rates, while at the same time confirming that convective mixing is an important quantifiable storage mechanism at the Utsira formation. To account for uncertainties in the description of the storage formation, sensitivity studies are conducted with respect to some of the most uncertain parameters.</p>
Fri, 19 Dec 2014 00:00:00 GMT
http://hdl.handle.net/1956/9614
Do abnormal serum lipid levels increase the risk of chronic low back pain? The Nord-Trøndelag Health Study
Heuch, Ingrid; Heuch, Ivar; Hagen, Knut; Zwart, John-Anker
Journal article
Background: Cross-sectional studies suggest associations between abnormal lipid levels and prevalence of low back pain (LBP), but it is not known if there is any causal relationship.
Objective: The objective was to determine, in a population-based prospective cohort study, whether there is any relation between levels of total cholesterol, high density lipoprotein (HDL) cholesterol and triglycerides and the probability of experiencing subsequent chronic LBP, among individuals both with and without LBP at baseline.
Methods: Information was collected in the community-based HUNT 2 (1995–1997) and HUNT 3 (2006–2008) surveys of an entire Norwegian county. Participants were 10,151 women and 8731 men aged 30–69 years, not affected by chronic LBP at baseline, and 3902 women and 2666 men with LBP at baseline. Eleven years later the participants indicated whether they currently suffered from chronic LBP.
Results: Among women without LBP at baseline, HDL cholesterol levels were inversely associated and triglyceride levels positively associated with the risk of chronic LBP at end of follow-up in analyses adjusted for age only. Adjustment for the baseline factors education, work status, physical activity, smoking, blood pressure and in particular BMI largely removed these associations (RR: 0.96, 95% CI: 0.85–1.07 per mmol/l of HDL cholesterol; RR: 1.16, 95% CI: 0.94–1.42 per unit of lg(triglycerides)). Total cholesterol levels showed no associations. In women with LBP at baseline and men without LBP at baseline weaker relationships were observed. In men with LBP at baseline, an inverse association with HDL cholesterol remained after complete adjustment (RR: 0.83, 95% CI: 0.72–0.95 per mmol/l).
Conclusion: Crude associations between lipid levels and risk of subsequent LBP in individuals without current LBP are mainly caused by confounding with body mass. However, an association with low HDL levels may still remain in men who are already affected and possibly experience a higher pain intensity.
Thu, 18 Sep 2014 00:00:00 GMThttp://hdl.handle.net/1956/96142014-09-18T00:00:00ZIdentification of subsurface structures using electromagnetic data and shape priors
http://hdl.handle.net/1956/9612
Identification of subsurface structures using electromagnetic data and shape priors
Tveit, Svenn; Bakr, Shaaban A.; Lien, Martha; Mannseth, Trond
Journal article
We consider the inverse problem of identifying large-scale subsurface structures using the controlled source electromagnetic method. To identify structures in the subsurface where the contrast in electric conductivity can be small, regularization is needed to preserve structural information. We propose to combine two approaches for regularization of the inverse problem. In the first approach we utilize a model-based, reduced, composite representation of the electric conductivity that is highly flexible, even for a moderate number of degrees of freedom. With a low number of parameters, the inverse problem is efficiently solved using a standard, second-order gradient-based optimization algorithm. Further regularization is obtained using structural prior information, available, e.g., from interpreted seismic data. The reduced conductivity representation is suitable for incorporation of structural prior information. Such prior information cannot, however, be accurately modeled with a Gaussian distribution. To alleviate this, we incorporate the structural information using shape priors. The shape prior technique requires the choice of a kernel function, which is application dependent. We argue for using the conditionally positive definite kernel, which is shown to have computational advantages over the commonly applied Gaussian kernel for our problem. Numerical experiments on various test cases show that the methodology is able to identify fairly complex subsurface electric conductivity distributions while preserving structural prior information during the inversion.
Sun, 01 Mar 2015 00:00:00 GMThttp://hdl.handle.net/1956/96122015-03-01T00:00:00ZInversion of CSEM Data for Subsurface Structure Identification and Numerical Assessment of the Upstream Mobility Scheme
http://hdl.handle.net/1956/9611
Inversion of CSEM Data for Subsurface Structure Identification and Numerical Assessment of the Upstream Mobility Scheme
Tveit, Svenn
Doctoral thesis
<p>Part I</p>
<p>In this part of the thesis, two different methodologies for solving the inverse problem of mapping the subsurface electric conductivity distribution using controlled source electromagnetic (CSEM) data are presented. The two inversion methodologies are based on a classical and a Bayesian approach for solving inverse problems, respectively.</p>
<p>In the classical approach, we regularize the inverse problem by incorporating structural prior information available from, e.g., interpreted seismic data. In many cases, the outcome of an interpretation of seismic data cannot be well approximated by a Gaussian distribution. Hence, to incorporate non-Gaussian prior information we have applied the shape prior technique. Here, an implicit transformation of variables facilitates the incorporation of non-Gaussian prior information, at the expense of an application-dependent kernel function.</p>
<p>In the Bayesian approach, a combination of prior knowledge and observed data results in a solution given as a posterior probability density function (PDF). To sample from the posterior PDF, a sequential Bayesian method, the ensemble Kalman filter (EnKF), is applied. Structural prior information is naturally incorporated as a part of the Bayesian framework.</p>
<p>To represent large-scale subsurface structures, two model-based, composite parameterizations based on the level-set representation are applied in the inversion methodologies. By using a reduced number of parameters in the representation, a regularization of the inverse problem is achieved. Moreover, it enables the use of second-order gradient-based optimization algorithms in the classical approach.</p>
<p>Part II</p>
<p>In this part of the thesis, a numerical investigation of the upstream mobility scheme for calculating fluid flow in porous media is presented. Previous studies have shown that the upstream mobility scheme exhibits erroneous behaviour when approximating pure gravity segregation flow in 1D heterogeneous porous media. The errors shown, however, were small in magnitude. In this work, numerical experiments that include both advection and gravity segregation are conducted. It is shown that the errors produced in this case may be larger in magnitude than for pure gravity segregation, but are only found in countercurrent flow situations.</p>
Fri, 27 Mar 2015 00:00:00 GMThttp://hdl.handle.net/1956/96112015-03-27T00:00:00ZInexact linear solvers for control volume discretizations in porous media
http://hdl.handle.net/1956/9588
Inexact linear solvers for control volume discretizations in porous media
Keilegavlen, Eirik; Nordbotten, Jan Martin
Journal article
We discuss the construction of multi-level inexact linear solvers for control volume discretizations for porous media. The methodology forms a contrast to standard iterative solvers by utilizing an algebraic hierarchy of approximations which preserve the conservative structure of the underlying control volume. Our main result is the generalization of multiscale control volume methods as multilevel inexact linear solvers for conservative discretizations through the design of a particular class of preconditioners. This construction thereby bridges the gap between multiscale approximation and linear solvers. The resulting approximation sequence is referred to as inexact solvers. We seek a conservative solution, in the sense of control-volume discretizations, within a prescribed accuracy. To this end, we give an abstract guaranteed a posteriori error bound relating the accuracy of the linear solver to the underlying discretization. These error bounds are explicitly computable for the grids considered herein. The aforementioned hierarchy of conservative approximations can also be considered in the context of multi-level upscaling, and this perspective is highlighted in the text as appropriate. The new construction is supported by numerical examples highlighting the performance of the inexact linear solver realized in both a multi- and two-level context for two- and three-dimensional heterogeneous problems defined on structured and unstructured grids. The numerical examples assess the performance of the approach both as an inexact solver, as well as in comparison to standard algebraic multigrid methods.
Wed, 03 Dec 2014 00:00:00 GMThttp://hdl.handle.net/1956/95882014-12-03T00:00:00ZCapturing the coupled hydro-mechanical processes occurring during CO2 injection – example from In Salah
http://hdl.handle.net/1956/9489
Capturing the coupled hydro-mechanical processes occurring during CO2 injection – example from In Salah
Bjørnarå, Tore Ingvald; Mathias, Simon A.; Nordbotten, Jan Martin; Park, Joonsang; Bohloli, Bahman
Journal article
At In Salah, CO2 is removed from the production stream of several natural gas fields and re-injected into a deep and relatively thin saline formation, in three different locations. The observed deformation of the surface above the injection sites has partly been attributed to expansion and compaction of the storage aquifer, but analysis of field data and measurements from monitoring has verified that substantial activation of fractures and faults occurs. History-matching observed data in numerical models involves several model iterations at high computational cost. To address this, a simplified model that captures the key hydro-mechanical effects, while retaining reasonable accuracy when applied to realistic field data from In Salah, has been derived and compared to a fully resolved model. Results from the case study presented here show a significant saving in computational cost (36%) and a computational speed-up factor of 2.7.
Wed, 01 Jan 2014 00:00:00 GMThttp://hdl.handle.net/1956/94892014-01-01T00:00:00ZPseudo H-type algebras and sub-Riemannian cut locus
http://hdl.handle.net/1956/9449
Pseudo H-type algebras and sub-Riemannian cut locus
Autenried, Christian
Doctoral thesis
Fri, 13 Feb 2015 00:00:00 GMThttp://hdl.handle.net/1956/94492015-02-13T00:00:00ZOn weak solutions and a convergent numerical scheme for the compressible Navier-Stokes Equations
http://hdl.handle.net/1956/9442
On weak solutions and a convergent numerical scheme for the compressible Navier-Stokes Equations
Svärd, Magnus
Journal article
In this paper, the three-dimensional compressible Navier-Stokes equations are considered on a periodic domain. We propose a semi-discrete numerical scheme and derive a priori bounds that ensure that the resulting system of ordinary differential equations is solvable for any h > 0. An a posteriori examination that the density remains uniformly bounded away from 0 will establish that a subsequence of the numerical solutions converges to a weak solution of the compressible Navier-Stokes equations.
Thu, 26 Feb 2015 00:00:00 GMThttp://hdl.handle.net/1956/94422015-02-26T00:00:00ZAssessment of Sequential and Simultaneous Ensemble-based History Matching Methods for Weakly Non-linear Problems
http://hdl.handle.net/1956/9398
Assessment of Sequential and Simultaneous Ensemble-based History Matching Methods for Weakly Non-linear Problems
Fossum, Kristian
Doctoral thesis
<p>The ensemble Kalman filter (EnKF) has, since its introduction in 1994, gained much attention as a tool for sequential data assimilation in many scientific areas. In recent years, the EnKF has been utilized for estimating the poorly known petrophysical parameters in petroleum reservoir models. The ensemble based methodology has inspired several related methods, utilized both in data assimilation and for parameter estimation. All these methods, including the EnKF, can be shown to converge to the correct solution in the case of a Gaussian prior model, Gaussian data error, and linear model dynamics. However, for many problems where the methods are applied, these conditions are not satisfied. Moreover, several numerical studies have shown that, for such cases, the different methods have different approximation errors.</p>
<p>Considering parameter estimation for problems where the model depends on the parameters in a non-linear fashion, this thesis explores the similarities and differences between the EnKF and the alternative methods. Several characteristics are established, and it is shown that each method represents a specific combination of these characteristics. By numerical comparison, it is further shown that a variation of the characteristics produces a variation of the approximation error.</p>
<p>A special emphasis is put on the effect of one characteristic: whether data are assimilated sequentially or simultaneously. Typically, several data types are utilized in the parameter estimation problem. In this thesis, we assume that each data type depends on the parameters in a specific non-linear fashion. Considering the assimilation of two weakly non-linear data types with different degrees of non-linearity, we show, through analytical studies, that the difference between sequential and simultaneous assimilation depends on the combination of data.</p>
<p>Via numerical modelling, we investigate the difference between sequential and simultaneous assimilation on toy models and simplified reservoir problems. Utilizing realistic reservoir data, we show that the assumption of differing non-linearity for different data types holds. Moreover, we demonstrate that, for favourable degrees of non-linearity, it is beneficial to assimilate the data in ascending order of non-linearity.</p>
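For reference, the EnKF analysis step that the thesis builds on can be sketched in a few lines. This is a generic stochastic EnKF update for a scalar parameter (assumptions of this sketch, not the thesis' code): the ensemble covariance replaces the exact covariance, and observations are perturbed so the analysis spread stays statistically correct.

```python
# Sketch: one stochastic EnKF analysis step for a scalar parameter x
# observed through d = h(x) + noise. With linear h and Gaussian statistics
# this reproduces the exact posterior as the ensemble size grows.

import random

def enkf_update(ens, d_obs, h, r_var, rng):
    """ens: list of parameter samples; d_obs: observed datum;
    h: (possibly nonlinear) forward map; r_var: observation-error variance."""
    n = len(ens)
    pred = [h(x) for x in ens]
    xm = sum(ens) / n
    dm = sum(pred) / n
    # ensemble (cross-)covariances
    c_xd = sum((x - xm) * (p - dm) for x, p in zip(ens, pred)) / (n - 1)
    c_dd = sum((p - dm) ** 2 for p in pred) / (n - 1)
    gain = c_xd / (c_dd + r_var)  # Kalman gain from ensemble statistics
    # perturbed observations keep the analysis spread statistically correct
    return [x + gain * (d_obs + rng.gauss(0.0, r_var ** 0.5) - p)
            for x, p in zip(ens, pred)]

if __name__ == "__main__":
    rng = random.Random(1)
    prior = [rng.gauss(0.0, 1.0) for _ in range(2000)]    # x ~ N(0, 1)
    post = enkf_update(prior, 2.0, lambda x: x, 1.0, rng)  # observe x + N(0,1)
    m = sum(post) / len(post)
    # exact Gaussian posterior mean is 1.0 (precision-weighted average)
    print(abs(m - 1.0) < 0.1)
```

Sequential assimilation, as discussed in the thesis, amounts to calling such an update once per datum in a chosen order, whereas simultaneous assimilation stacks all data into one update.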
Fri, 13 Feb 2015 00:00:00 GMThttp://hdl.handle.net/1956/93982015-02-13T00:00:00ZDynamic Capillary Effects in the Simulation of Flow and Transport in Porous Media: A New Linearisation Method
http://hdl.handle.net/1956/9374
Dynamic Capillary Effects in the Simulation of Flow and Transport in Porous Media: A New Linearisation Method
Teveldal, Silje Kjønaas
Master thesis
In this thesis, mathematical models with and without dynamic capillary effects are developed to model water flow and solute transport through a porous medium. The system of equations is discretised using the finite volume method TPFA in space and the backward Euler method in time. To solve the nonlinear systems appearing at each time step numerically, robust linearisation methods are proposed. These methods do not involve the computation of derivatives. The methods are analysed and shown to be linearly convergent and robust. Moreover, the convergence is shown to be independent of the mesh size. The influence that the dynamic effects have on flow and transport is studied numerically. Additional numerical experiments were conducted to study the convergence of the linearisation schemes. The numerical results are shown to be in correspondence with the theoretical results.
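A derivative-free, linearly convergent linearisation of the kind described above can be sketched on a scalar model problem (a generic L-scheme, assumed comparable in spirit to the thesis' method, not taken from it): instead of Newton's derivative, a fixed stabilisation constant L bounding the nonlinearity's slope makes each iteration a contraction.

```python
# Sketch: solve the scalar nonlinear equation b(u) + c*u = f arising at one
# implicit time step. With L >= max b' on the relevant range, the iteration
# converges linearly from any starting guess, with no derivatives needed.

def l_scheme(b, c, f, L, u0, tol=1e-12, maxit=200):
    u = u0
    for it in range(maxit):
        # L*(u_new - u) + b(u) + c*u_new = f  =>  solve for u_new
        u_new = (f - b(u) + L * u) / (L + c)
        if abs(u_new - u) < tol:
            return u_new, it + 1
        u = u_new
    return u, maxit

if __name__ == "__main__":
    b = lambda u: u ** 3          # monotone nonlinearity, b' = 3u^2
    # solve u^3 + u = 2 (exact root u = 1), taking L >= 3 on [0, 1]
    u, iters = l_scheme(b, c=1.0, f=2.0, L=3.0, u0=0.0)
    print(abs(u - 1.0) < 1e-9)
```

The same structure carries over to the discretised PDE setting, with the scalar division replaced by one linear solve per iteration.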
Thu, 25 Dec 2014 00:00:00 GMThttp://hdl.handle.net/1956/93742014-12-25T00:00:00ZOn the extension giving the truncated Witt vectors
http://hdl.handle.net/1956/9347
On the extension giving the truncated Witt vectors
Skjøtskift, Torgeir
Master thesis
We explore the theory of cohomology of groups and the classification of group extensions with abelian kernel. We then look at the group extensions that underlie the truncated Witt vectors on the truncation set {1,p} where p is a prime number. It turns out that we can do without the multiplicative structure on the source ring A by factoring the extension's representing cocycle through a map into the p-fold tensor product of A divided out by the C_p-action.
Mon, 05 Jan 2015 00:00:00 GMThttp://hdl.handle.net/1956/93472015-01-05T00:00:00ZTensor Induction as Left Kan Extension
http://hdl.handle.net/1956/9319
Tensor Induction as Left Kan Extension
Aye, Kaythi
Master thesis
Tue, 02 Dec 2014 00:00:00 GMThttp://hdl.handle.net/1956/93192014-12-02T00:00:00ZWeak solutions and convergent numerical schemes of Brenner-Navier-Stokes equations
http://hdl.handle.net/1956/9230
Weak solutions and convergent numerical schemes of Brenner-Navier-Stokes equations
Svärd, Magnus
Research report
Lately, there has been some interest in modifications of the compressible Navier-Stokes equations to include diffusion of mass. In this paper, we investigate possible ways to add mass diffusion to the 1-D Navier-Stokes equations without violating the basic entropy inequality. As a result, we recover a general form of Brenner's modification of the Navier-Stokes equations. We consider Brenner's system along with another modification where the viscous terms collapse to a Laplacian diffusion. For each of the two modifications, we derive a priori estimates for the PDE, sufficiently strong to admit a weak solution; we propose a numerical scheme and demonstrate that it satisfies the same a priori estimates. For both modifications, we then demonstrate that the numerical schemes generate solutions that converge to a weak solution (up to a subsequence) as the grid is refined.
Tue, 20 Jan 2015 00:00:00 GMThttp://hdl.handle.net/1956/92302015-01-20T00:00:00ZModeling of oil reservoirs with focus on microbial induced effects
http://hdl.handle.net/1956/9207
Modeling of oil reservoirs with focus on microbial induced effects
Babatunde, Stanley Owulabi
Master thesis
As an abstract to this thesis, we review some of the literature on EOR and discuss the processes, strengths and weaknesses of Microbial Enhanced Oil Recovery techniques. A two-phase flow model comprising water and oil, via the concept of mean pressure, has been formulated using mass conservation equations, Darcy's law and constitutive relations. This resulted in a set of coupled nonlinear parabolic partial differential equations with the primary variables being the mean pressure and water saturation. We discretized these equations in one dimension using a control volume discretization scheme in space and implicit Euler in time. We employed the IMPES approach, which decouples the primary variables. A model validation test was made by comparison with an analytical solution and with the Couplex-Gas benchmark. The model was used to investigate two major mechanisms by which bacterial activity helps to enhance the recovery of residual oil.
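The IMPES decoupling mentioned above can be sketched on a minimal 1D model (assumptions of this sketch, not the thesis' model: incompressible flow, quadratic relative permeabilities, unit viscosities, permeability and porosity): pressure is solved implicitly with mobilities frozen at the current saturation, then saturation is advanced explicitly with upwinded fractional flow.

```python
# Sketch of one IMPES step: implicit (tridiagonal) pressure solve,
# explicit upwind saturation update, Dirichlet pressure at both ends,
# water injected at the left boundary.

def thomas(a, b, c, d):
    """Solve a tridiagonal system (a: sub-, b: main, c: super-diagonal)."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def impes_step(S, dt, dx, p_l=1.0, p_r=0.0):
    n = len(S)
    lam = [s * s + (1.0 - s) ** 2 for s in S]   # total mobility at current S
    # face transmissibilities (harmonic averages; half-cells at boundaries)
    T = [2.0 * lam[0] / dx]
    T += [2.0 * lam[i - 1] * lam[i] / ((lam[i - 1] + lam[i]) * dx)
          for i in range(1, n)]
    T += [2.0 * lam[-1] / dx]
    # implicit pressure: assemble and solve the tridiagonal system
    a = [0.0] + [-T[i] for i in range(1, n)]
    c = [-T[i + 1] for i in range(n - 1)] + [0.0]
    b = [T[i] + T[i + 1] for i in range(n)]
    d = [0.0] * n
    d[0] += T[0] * p_l
    d[-1] += T[n] * p_r
    p = thomas(a, b, c, d)
    # face total velocities, then explicit upwind saturation update
    v = [T[0] * (p_l - p[0])]
    v += [T[i] * (p[i - 1] - p[i]) for i in range(1, n)]
    v += [T[n] * (p[-1] - p_r)]
    fw = lambda s: s * s / (s * s + (1.0 - s) ** 2)   # fractional flow
    Fw = [v[0] * 1.0]  # pure water injected at the left boundary
    Fw += [v[i] * fw(S[i - 1] if v[i] >= 0 else S[i]) for i in range(1, n)]
    Fw += [v[n] * fw(S[-1])]
    return [S[i] + dt / dx * (Fw[i] - Fw[i + 1]) for i in range(n)]

if __name__ == "__main__":
    S = [0.0] * 50                     # initially oil-filled domain
    for _ in range(60):                # dt chosen for explicit CFL stability
        S = impes_step(S, 0.004, 0.02)
    print(S[0] > 0.5 and S[-1] < 0.1)  # water front advancing from the left
```

Each step thus solves one linear system for pressure and performs one cheap explicit update for saturation, which is exactly the decoupling that makes IMPES attractive for weakly coupled regimes.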
Sat, 01 Nov 2014 00:00:00 GMThttp://hdl.handle.net/1956/92072014-11-01T00:00:00ZMathematical modelling of slow drug release from collagen matrices
http://hdl.handle.net/1956/9181
Mathematical modelling of slow drug release from collagen matrices
Erichsen, Birgitte Riisøen
Master thesis
This master's thesis is about controlled drug release, which is a relatively new area of mathematical modelling. The thesis has two major focuses. The first is to further understand a previously developed model for drug release from collagen matrices by solving it with a different numerical scheme, and the second is to develop a new model based on a different geometry. Both models are based on mass conservation and Fick's law, and are therefore possible to compare. The two models have been discretized and implemented, and the results compared to experimental data.
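A model of the kind the abstract describes can be sketched in its simplest form (assumptions of this sketch, not the thesis' model or geometry): Fickian diffusion in a slab matrix with perfect-sink boundaries, tracking the cumulative fraction of drug released.

```python
# Sketch: explicit finite differences for dc/dt = D * d2c/dx2 on a unit
# slab, c = 0 at both boundaries (perfect sink), uniform initial loading.
# FTCS is stable for r = D*dt/dx^2 <= 1/2; we use r = 1/4.

def release_curve(nx=50, D=1.0, t_end=0.05):
    dx = 1.0 / nx
    dt = 0.25 * dx * dx / D
    r = D * dt / (dx * dx)
    c = [0.0] + [1.0] * (nx - 1) + [0.0]    # loaded matrix, sink boundaries
    m0 = sum(c) * dx                         # initial drug mass
    frac, t = [], 0.0
    while t < t_end:
        c = [0.0] + [c[i] + r * (c[i + 1] - 2.0 * c[i] + c[i - 1])
                     for i in range(1, nx)] + [0.0]
        t += dt
        frac.append(1.0 - sum(c) * dx / m0)  # cumulative released fraction
    return frac

if __name__ == "__main__":
    frac = release_curve()
    # at short times the released fraction follows roughly 4*sqrt(D*t/pi)
    print(0.4 < frac[-1] < 0.6)
```

The release curve is monotone by construction, and richer models (different geometries, binding kinetics) change the operator and boundary conditions but not this basic mass-conservation structure.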
Tue, 23 Sep 2014 00:00:00 GMThttp://hdl.handle.net/1956/91812014-09-23T00:00:00ZCell-centered finite volume discretizations for deformable porous media
http://hdl.handle.net/1956/9014
Cell-centered finite volume discretizations for deformable porous media
Nordbotten, Jan Martin
Journal article
The development of cell-centered finite volume discretizations for deformation is motivated by the desire for an approach compatible with the discretization of fluid flow in deformable porous media. We express the conservation of momentum in the finite volume sense, and introduce three approximation methods for the cell-face stresses. The discretization method is developed for general grids in one to three spatial dimensions, and leads to a global discrete system of equations for the displacement vector in each cell, after which the stresses are calculated based on a local expression. The method allows for anisotropic, heterogeneous and discontinuous coefficients.
The novel finite volume discretization is justified through numerical validation tests designed to investigate classical challenges in the discretization of mechanical equations. In particular, our examples explore the stability with respect to the Poisson ratio and spatial discontinuities in the material parameters. For applications, logically Cartesian grids are prevailing, and we also explore the performance on perturbations of such grids, as well as on unstructured grids. For reference, comparison is made in all cases with the lowest-order Lagrangian finite elements; the finite volume method proposed herein is comparable for approximating displacement, and is superior for approximating stresses.
Mon, 11 Aug 2014 00:00:00 GMThttp://hdl.handle.net/1956/90142014-08-11T00:00:00ZExplicit volume-preserving splitting methods for divergence-free ODEs by tensor-product basis decompositions
http://hdl.handle.net/1956/9006
Explicit volume-preserving splitting methods for divergence-free ODEs by tensor-product basis decompositions
Munthe-Kaas, Antonella Zanna
Journal article
We discuss the construction of volume-preserving splitting methods based on a tensor product of single-variable basis functions. The vector field is decomposed as the sum of elementary divergence-free vector fields (EDFVFs), each of them corresponding to a basis function. The theory is a generalization of the monomial basis approach introduced in Xue & Zanna (2013, BIT Numer. Math., 53, 265–281) and has the trigonometric splitting of Quispel & McLaren (2003, J. Comp. Phys., 186, 308–316) and the splitting in shears of McLachlan & Quispel (2004, BIT, 44, 515–538) as special cases. We introduce the concept of diagonalizable EDFVFs and identify the solvable ones as those corresponding to the monomial basis and the exponential basis. In addition to giving a unifying view of some types of volume-preserving splitting methods already known in the literature, the present approach allows us to give a closed-form solution also to other types of vector fields that could not be treated before, namely those corresponding to the mixed tensor product of monomial and exponential (including trigonometric) basis functions.
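The splitting idea can be illustrated on a toy planar field (this sketch uses the shear-splitting special case mentioned in the abstract, not the paper's general tensor-product construction): a divergence-free field (f(y), g(x)) splits into two shear EDFVFs, (f(y), 0) and (0, g(x)), whose flows are exact and trivially volume-preserving, so their composition is an explicit volume-preserving integrator.

```python
# Sketch: one step of a shear splitting for x' = f(y), y' = g(x), and a
# finite-difference check that the one-step map has Jacobian determinant 1.

import math

def shear_step(x, y, h, f, g):
    x = x + h * f(y)   # exact flow of the shear (f(y), 0)
    y = y + h * g(x)   # exact flow of the shear (0, g(x))
    return x, y

def jacobian_det(x, y, h, f, g, eps=1e-6):
    """Central-difference determinant of the one-step map at (x, y)."""
    x1, y1 = shear_step(x + eps, y, h, f, g)
    x2, y2 = shear_step(x - eps, y, h, f, g)
    x3, y3 = shear_step(x, y + eps, h, f, g)
    x4, y4 = shear_step(x, y - eps, h, f, g)
    a, c = (x1 - x2) / (2 * eps), (y1 - y2) / (2 * eps)
    b, d = (x3 - x4) / (2 * eps), (y3 - y4) / (2 * eps)
    return a * d - b * c

if __name__ == "__main__":
    f, g = math.sin, math.cos      # x' = sin(y), y' = cos(x): divergence-free
    det = jacobian_det(0.3, 0.7, 0.1, f, g)
    print(abs(det - 1.0) < 1e-7)   # volume preserved exactly by the map
```

Each shear map has unit Jacobian determinant by inspection, so the composed map does too, for any step size; the paper's contribution is extending such closed-form solvability to mixed monomial and exponential bases.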
Sun, 23 Feb 2014 00:00:00 GMThttp://hdl.handle.net/1956/90062014-02-23T00:00:00ZSymmetric spaces and Lie triple systems in numerical analysis of differential equations
http://hdl.handle.net/1956/9002
Symmetric spaces and Lie triple systems in numerical analysis of differential equations
Munthe-Kaas, Hans; Quispel, Reinout; Zanna, Antonella
Journal article
A remarkable number of different numerical algorithms can be understood and analyzed using the concepts of symmetric spaces and Lie triple systems, which are well known in differential geometry from the study of spaces of constant curvature and their tangents. This theory can be used to unify a range of different topics, such as polar-type matrix decompositions, splitting methods for computation of the matrix exponential, composition of selfadjoint numerical integrators, and dynamical systems with symmetries and reversing symmetries. The thread of this paper is the following: involutive automorphisms on groups induce a factorization at the group level and a splitting at the algebra level. In this paper we give an introduction to the mathematical theory behind these constructions and review recent results. Furthermore, we present a new Yoshida-like technique for self-adjoint numerical schemes that allows one to increase the order of preservation of symmetries by two units. The proposed technique has the property that all the time-steps are positive.
Sat, 01 Mar 2014 00:00:00 GMThttp://hdl.handle.net/1956/90022014-03-01T00:00:00ZOn the Formulation of Mass, Momentum and Energy Conservation in the KdV Equation
http://hdl.handle.net/1956/8968
On the Formulation of Mass, Momentum and Energy Conservation in the KdV Equation
Ali, Alfatih Mohammed A.; Kalisch, Henrik
Journal article
The Korteweg-de Vries (KdV) equation is widely recognized as a simple model for unidirectional weakly nonlinear dispersive waves on the surface of a shallow body of fluid. While solutions of the KdV equation describe the shape of the free surface, information about the underlying fluid flow is encoded into the derivation of the equation, and the present article focuses on the formulation of mass, momentum and energy balance laws in the context of the KdV approximation. The densities and the associated fluxes appearing in these balance laws are given in terms of the principal unknown variable η representing the deflection of the free surface from rest position. The formulae are validated by comparison with previous work on the steady KdV equation. In particular, the mass flux, total head and momentum flux in the current context are compared to the quantities Q, R and S used in the work of Benjamin and Lighthill (Proc. R. Soc. Lond. A 224:448–460, 1954) on cnoidal waves and undular bores.
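For reference, the standard dimensional KdV equation underlying such balance laws (textbook form, not quoted from the article) reads:

```latex
% Surface deflection \eta over undisturbed depth h_0, with c_0 = \sqrt{g h_0}:
\eta_t + c_0\,\eta_x + \frac{3 c_0}{2 h_0}\,\eta\,\eta_x
       + \frac{c_0 h_0^2}{6}\,\eta_{xxx} = 0 .
```

Balance laws of the type the article constructs then take the form \(\partial_t \rho(\eta) + \partial_x q(\eta) = 0\), with density \(\rho\) and flux \(q\) expressed in \(\eta\) and its derivatives; since the KdV equation itself is in divergence form, the integral of \(\eta\) is conserved, which is the mass balance in its simplest guise.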
Wed, 01 Oct 2014 00:00:00 GMThttp://hdl.handle.net/1956/89682014-10-01T00:00:00ZElectrical conductivity of fractured media: A computational study of the self-consistent method
http://hdl.handle.net/1956/8923
Electrical conductivity of fractured media: A computational study of the self-consistent method
Sævik, Pål Næverlid; Berre, Inga; Jakobsen, Morten; Lien, Martha
Conference object
Effective medium theory can be used to link conductivity estimation methods with prior knowledge about the distribution of fractures in the investigated geological structure. In the literature, little work has been presented on assessing the accuracy of effective medium approximations for dense networks of finite-sized fractures. We present here a systematic computational study, comparing the conductivity predictions of the popular self-consistent method with results from numerical finite-element simulations. Our results show that the self-consistent method is accurate within acceptable error bounds for a range of parameter values, in some cases even beyond the percolation limit. We also compare the percolation thresholds predicted by self-consistent theory with the thresholds obtained by a numerical percolation algorithm. For the cases we have studied, the percolation thresholds agree to a remarkable degree.
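The self-consistent idea can be sketched in its simplest instance (this example uses spherical inclusions in 3D; the conference paper treats finite fractures, i.e. flattened inclusions, but the fixed-point structure is the same): the effective conductivity s_e is defined implicitly by requiring that the inclusion phases, embedded in the effective medium itself, produce no net perturbation.

```python
# Sketch: the Bruggeman self-consistent equation for two phases with
# conductivities s1, s2 and volume fraction f of phase 1,
#   f*(s1 - s_e)/(s1 + 2*s_e) + (1 - f)*(s2 - s_e)/(s2 + 2*s_e) = 0,
# solved by bisection. For an insulating host this predicts a percolation
# threshold at f = 1/3.

def bruggeman(f, s1, s2, tol=1e-12):
    """Effective conductivity of an s1-phase of volume fraction f in s2."""
    def resid(se):
        return (f * (s1 - se) / (s1 + 2.0 * se)
                + (1.0 - f) * (s2 - se) / (s2 + 2.0 * se))
    lo = min(s1, s2) + 1e-15   # s_e lies between the phase conductivities
    hi = max(s1, s2) + 1e-15
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if resid(mid) > 0.0:   # residual changes sign from + (lo) to - (hi)
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

if __name__ == "__main__":
    # below f = 1/3 the near-insulating host kills the effective conductivity;
    # above it, s_e grows as s1*(3f - 1)/2
    print(bruggeman(0.30, 1.0, 1e-9) < 1e-3 < bruggeman(0.40, 1.0, 1e-9))
```

Replacing the spherical depolarization factors with those of oblate spheroids gives the fracture version studied in the paper, and the predicted threshold is what the numerical percolation algorithm is compared against.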
Sun, 01 Jan 2012 00:00:00 GMThttp://hdl.handle.net/1956/89232012-01-01T00:00:00ZFinite volume hydromechanical simulation in porous media
http://hdl.handle.net/1956/8911
Finite volume hydromechanical simulation in porous media
Nordbotten, Jan Martin
Journal article
Cell-centered finite volume methods are prevailing in numerical simulation of flow in porous media. However, due to the lack of cell-centered finite volume methods for mechanics, coupled flow and deformation is usually treated either by coupled finite-volume/finite-element discretizations, or within a finite element setting. The former approach is unfavorable as it introduces two separate grid structures, while the latter approach loses the advantages of finite volume methods for the flow equation. Recently, we proposed a cell-centered finite volume method for elasticity. Herein, we explore the applicability of this novel method to provide a compatible finite volume discretization for coupled hydromechanical flow in porous media. We detail in particular the issue of coupling terms, and show how this is naturally handled. Furthermore, we observe how the cell-centered finite volume framework naturally allows for modeling fractured and fracturing porous media through internal boundary conditions. We support the discussion with a set of numerical examples: the convergence properties of the coupled scheme are first investigated; second, we illustrate the practical applicability of the method both for fractured and heterogeneous media.
Tue, 27 May 2014 00:00:00 GMThttp://hdl.handle.net/1956/89112014-05-27T00:00:00ZControllability on Infinite-Dimensional Manifolds: A Chow-Rashevsky Theorem
http://hdl.handle.net/1956/8886
Controllability on Infinite-Dimensional Manifolds: A Chow-Rashevsky Theorem
Khajeh Salehani, Mahdi; Markina, Irina
Journal article
One of the fundamental problems in control theory is that of controllability, the question of whether one can drive the system from one point to another with a given class of controls. A classical result in geometric control theory of finite-dimensional (nonlinear) systems is Chow–Rashevsky theorem that gives a sufficient condition for controllability on any connected manifold of finite dimension. In other words, the classical Chow–Rashevsky theorem, which is in fact a primary theorem in subriemannian geometry, gives a global connectivity property of a subriemannian manifold. In this paper, following the unified approach of Kriegl and Michor (The Convenient Setting of Global Analysis, Mathematical Surveys and Monographs, vol. 53, Am. Math. Soc., Providence, 1997) for a treatment of global analysis on a class of locally convex spaces known as convenient, we give a generalization of Chow–Rashevsky theorem for control systems in regular connected manifolds modelled on convenient (infinite-dimensional) locally convex spaces which are not necessarily normable. To indicate an application of our approach to the infinite-dimensional geometric control problems, we conclude the paper with a novel controllability result on the group of orientation-preserving diffeomorphisms of the unit circle.
Tue, 29 Apr 2014 00:00:00 GMThttp://hdl.handle.net/1956/88862014-04-29T00:00:00ZMethods and Tools for Analysis of Symmetric Cryptographic Primitives
http://hdl.handle.net/1956/8828
Methods and Tools for Analysis of Symmetric Cryptographic Primitives
Kazymyrov, Oleksandr
Doctoral thesis
<p>The development of modern cryptography is associated with the emergence of computing machines. Since specialized equipment for protection of sensitive information was initially implemented only in hardware, stream ciphers were widespread. Later, other areas of symmetric and asymmetric cryptography were established with the invention of general-purpose processors. In particular, such symmetric cryptographic primitives as block ciphers, message authentication codes (MACs), authenticated ciphers and others began to develop rapidly. Today various cryptographic algorithms are commonly used in everyday life to protect private data.</p>
<p>Design and analysis of advanced symmetric cryptographic primitives require a lot of time and resources. This is related to many factors, mainly to the cryptanalysis of prospective encryption algorithms under development. Every year new and modified attacks are published, leading to a rapid increase in the quantity of requirements and criteria imposed on cryptoprimitives.</p>
<p>Most of this thesis is devoted to analysis and improvement of cryptographic attacks and corresponding criteria for basic components. Almost all modern cryptoprimitives use nonlinear mappings for protection against advanced attacks. In connection with that a new method was proposed for the generation of random substitutions (S-boxes) with extreme cryptographic indicators that can be used in the next-generation ciphers to provide high and ultra-high security levels. In addition, several criteria imposed on S-boxes used in block ciphers were analyzed and their significance for block ciphers was proven. It is worth mentioning a practical method of testing two vectorial Boolean functions and a universal tool for checking properties of arbitrary binary nonlinear components presented in papers gathered in this thesis.</p>
<p>Another part of the thesis is dedicated to the cryptanalysis of hash functions as well as block and stream ciphers. To be more precise, an algebraic attack based on a binary decision diagram (BDD) was performed on the reduced Data Encryption Standard (DES), a scaled-down version of Advanced Encryption Standard (AES) and extended affine (EA) equivalence problem. Moreover, an algebraic approach was used to reconstruct an initial representation of the current Russian hash standard GOST 34.11-2012. Finally, a backward states tree method has been used to analyze stream ciphers based on the combination principle of linear and nonlinear feedback registers.</p>
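One of the "extreme cryptographic indicators" for S-boxes mentioned above is nonlinearity, which is computable from the Walsh spectrum. The following is a standard textbook computation (an assumed illustration, not the thesis' tool): for an n-bit S-box S, W(a, b) = sum over x of (-1)^(<a,x> xor <b,S(x)>), and the nonlinearity is 2^(n-1) - max |W(a, b)|/2 over all a and nonzero b.

```python
# Sketch: nonlinearity of an S-box from its Walsh spectrum.

def dot(a, x):
    """Inner product of bit vectors a and x over GF(2)."""
    return bin(a & x).count("1") & 1

def nonlinearity(sbox):
    n = len(sbox).bit_length() - 1
    assert len(sbox) == 1 << n
    wmax = 0
    for b in range(1, 1 << n):            # nonzero output masks only
        for a in range(1 << n):
            w = sum((-1) ** (dot(a, x) ^ dot(b, sbox[x]))
                    for x in range(1 << n))
            wmax = max(wmax, abs(w))
    return (1 << (n - 1)) - wmax // 2

if __name__ == "__main__":
    # the PRESENT block cipher's 4-bit S-box, a published reference table
    present = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
               0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]
    print(nonlinearity(present))          # 4: optimal for bijective 4-bit S-boxes
    print(nonlinearity(list(range(16))))  # 0: the identity map is linear
```

Generating random S-boxes and keeping those whose indicators (nonlinearity, differential uniformity, algebraic degree, and so on) reach extreme values is the screening loop that the proposed generation method accelerates.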
Mon, 01 Dec 2014 00:00:00 GMThttp://hdl.handle.net/1956/88282014-12-01T00:00:00ZClassical and Stochastic Slit Löwner Evolution
http://hdl.handle.net/1956/8746
Classical and Stochastic Slit Löwner Evolution
Ivanov, Georgy
Doctoral thesis
Fri, 10 Oct 2014 00:00:00 GMThttp://hdl.handle.net/1956/87462014-10-10T00:00:00ZImproving the error rates of the Begg and Mazumdar test for publication bias in fixed effects meta-analysis
http://hdl.handle.net/1956/8611
Improving the error rates of the Begg and Mazumdar test for publication bias in fixed effects meta-analysis
Gjerdevik, Miriam; Heuch, Ivar
Journal article
<p>Background: The rank correlation test introduced by Begg and Mazumdar is extensively used in meta-analysis to test for publication bias in clinical and epidemiological studies. It is based on correlating the standardized treatment effect with the variance of the treatment effect using Kendall’s tau as the measure of association. To our knowledge, the operational characteristics regarding the significance level of the test have not, however, been fully assessed.</p>
<p>Methods: We propose an alternative rank correlation test to improve the error rates of the original Begg and Mazumdar test. This test is based on the simulated distribution of the estimated measure of association, conditional on sampling variances. Furthermore, Spearman’s rho is suggested as an alternative rank correlation coefficient. The attained level and power of the tests are studied by simulations of meta-analyses assuming the fixed effects model.</p>
<p>Results: The significance levels of the original Begg and Mazumdar test often deviate considerably from the nominal level, the null hypothesis being rejected too infrequently. It is proven mathematically that the assumptions for using the rank correlation test are not strictly satisfied. The pairs of variables fail to be independent, and there is a correlation between the standardized effect sizes and sampling variances under the null hypothesis of no publication bias. In the meta-analysis setting, the adverse consequences of a false negative test are more profound than the disadvantages of a false positive test. Our alternative test improves the error rates in fixed effects meta-analysis. Its significance level equals the nominal value, and the Type II error rate is reduced. In small data sets Spearman’s rho should be preferred to Kendall’s tau as the measure of association.</p>
<p>Conclusions: As the attained significance levels of the test introduced by Begg and Mazumdar often deviate greatly from the nominal level, modified rank correlation tests, improving the error rates, should be preferred when testing for publication bias assuming fixed effects meta-analysis.</p>
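The original test statistic described in the Background can be sketched in a few lines. This is an illustrative re-implementation under the fixed effects model, not the authors' code: the function names and the simulated data are ours, and Kendall's tau-a (no tie handling) is used. Note, as the abstract points out, that the standardized effects and the variances are not strictly independent under the null, which is precisely the issue the modified test addresses.

```python
import math
import random

def kendall_tau(x, y):
    """Kendall's tau-a rank correlation (assumes no ties)."""
    n, s = len(x), 0
    for i in range(n):
        for j in range(i + 1, n):
            d = (x[i] - x[j]) * (y[i] - y[j])
            s += (d > 0) - (d < 0)
    return 2.0 * s / (n * (n - 1))

def begg_mazumdar_tau(effects, variances):
    """Correlate standardized effect sizes with sampling variances."""
    w = [1.0 / v for v in variances]
    pooled = sum(wi * t for wi, t in zip(w, effects)) / sum(w)   # fixed-effects estimate
    # variance of (t_i - pooled) is v_i - 1/sum(w); standardize each deviation
    t_star = [(t - pooled) / math.sqrt(v - 1.0 / sum(w))
              for t, v in zip(effects, variances)]
    return kendall_tau(t_star, variances)

rng = random.Random(42)
variances = [rng.uniform(0.05, 1.0) for _ in range(40)]
effects = [rng.gauss(0.3, math.sqrt(v)) for v in variances]   # no publication bias
tau = begg_mazumdar_tau(effects, variances)
```

Under no publication bias the correlation should be close to zero; a formal test would compare tau against its null distribution.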
Mon, 22 Sep 2014 00:00:00 GMThttp://hdl.handle.net/1956/86112014-09-22T00:00:00ZA study in Univalence
http://hdl.handle.net/1956/8566
A study in Univalence
Husebø, Anders Knarvik
Master thesis
In this master's thesis we study the recently developed homotopy type theory and its models within mathematics. We look at models in simplicial sets, simplicial symmetric monoids, and a new category which could be called multi-pointed simplicial sets. We also describe dependent type theory from the computer science point of view, together with some of its implications.
Mon, 02 Jun 2014 00:00:00 GMThttp://hdl.handle.net/1956/85662014-06-02T00:00:00ZPortfolio Optimization with PCC-GARCH-CVaR model
http://hdl.handle.net/1956/8555
Portfolio Optimization with PCC-GARCH-CVaR model
Xi, Linda Mon
Master thesis
This thesis investigates the Conditional Value-at-Risk (CVaR) portfolio optimization approach, combined with a univariate GARCH model and pair-copula constructions (PCC), to determine the optimal asset allocation for a portfolio.
The methodology focuses on minimizing CVaR as the risk measure, in place of the variance used in the traditional Markowitz optimization framework. The GARCH model provides a tool for predicting and analyzing the time-varying volatility to which financial assets are exposed, while copulas allow us to model the non-linear dependence structure and the margins separately.
We compare the performance of the CVaR-optimized portfolio with other investment strategies such as Constant-Mix and Buy-and-Hold. Although the choice of strategy depends on the investor's risk profile, it is shown empirically that the proposed CVaR-optimized portfolio outperforms the other two investment strategies in terms of accumulated wealth in the long run.
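As a rough illustration of using CVaR in place of variance as the objective, the sketch below estimates empirical CVaR from return scenarios and grid-searches a two-asset mix. The scenario parameters are invented and the approach is far cruder than the PCC-GARCH-CVaR model of the thesis; it only shows what the risk measure itself does.

```python
import random

def empirical_cvar(losses, alpha=0.95):
    """Mean of the worst (1 - alpha) fraction of losses (expected shortfall)."""
    worst = sorted(losses, reverse=True)
    k = max(1, int(round(len(losses) * (1 - alpha))))
    return sum(worst[:k]) / k

def portfolio_losses(weights, scenarios):
    """Loss per scenario: negative weighted portfolio return."""
    return [-sum(w * r for w, r in zip(weights, s)) for s in scenarios]

rng = random.Random(7)
# hypothetical return scenarios: asset 0 is volatile, asset 1 is calm
scenarios = [(rng.gauss(0.001, 0.03), rng.gauss(0.0005, 0.01)) for _ in range(5000)]

# crude grid search over two-asset weights for the minimum-CVaR mix
candidates = [i / 20 for i in range(21)]
best_w, best_cvar = min(
    ((w, empirical_cvar(portfolio_losses((w, 1.0 - w), scenarios))) for w in candidates),
    key=lambda pair: pair[1])
```

In practice the minimization is cast as a linear program (Rockafellar-Uryasev formulation) rather than a grid search, and the scenarios come from the fitted GARCH/copula model.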
Mon, 02 Jun 2014 00:00:00 GMThttp://hdl.handle.net/1956/85552014-06-02T00:00:00ZInvestigations of the Modified Navier-Stokes Equations in One Dimension
http://hdl.handle.net/1956/8554
Investigations of the Modified Navier-Stokes Equations in One Dimension
Sårheim, Inga Sofie
Master thesis
In this thesis we have investigated the Brenner-Navier-Stokes equations in one spatial dimension. These are a modified version of the Navier-Stokes equations, where the modification consists of a mass diffusion term. This modification is significant for flows with high density gradients. Here we examine a shock wave problem in argon. Both the original and the modified Navier-Stokes equations are used to solve the conservation laws. We study the effect of both entropy-stable and entropy-conservative schemes, in addition to several different ways of modelling the diffusion parameters.
The solutions are analyzed and compared with experimental results.
Mon, 02 Jun 2014 00:00:00 GMThttp://hdl.handle.net/1956/85542014-06-02T00:00:00ZContinuous Max-Flow for Image Segmentation with Shape Priors
http://hdl.handle.net/1956/8367
Continuous Max-Flow for Image Segmentation with Shape Priors
Kvile, Mari Aurlien
Master thesis
In this thesis we propose a stable method for image segmentation with shape priors. The original Chan-Vese intensity based segmentation model with regularisation term is extended to include shape prior information. We study shape priors which are pose invariant under the group of similarity transformations, that is under rotation, scaling and translation. In order to solve this problem robustly and effectively, an algorithm based on the theory of max-flow and min-cut is used in addition to a gradient descent procedure for updating the pose parameters. Comprehensive experiments are provided to demonstrate the behaviour of the proposed method on different images.
Mon, 02 Jun 2014 00:00:00 GMThttp://hdl.handle.net/1956/83672014-06-02T00:00:00ZFast Image Segmentation Using Variational Optimization Methods With Edge Detector
http://hdl.handle.net/1956/8366
Fast Image Segmentation Using Variational Optimization Methods With Edge Detector
Stanislaus, Mary Gerina
Master thesis
In this work, we apply techniques in variational optimization to image segmentation. We study three different segmentation models: one is based on the active contour method, the second is based on a piecewise constant level set method, and the last uses a continuous max-flow min-cut model. We obtain significantly better segmentation results in the first and the third model by including an experimental edge detector. The first model is a special case of the minimal partition problem, the second model uses discontinuities of piecewise constant level set functions to represent interfaces between the region of interest and the background, and the third model uses a spatially continuous max-flow min-cut framework which is a very efficient method to segment images. The first two models are non-convex and may contain many local solutions, but the last model is a convex optimization problem and therefore finds the global solution.
Mon, 02 Jun 2014 00:00:00 GMThttp://hdl.handle.net/1956/83662014-06-02T00:00:00ZExact and Superconvergent Solutions of the Multi-Point Flux Approximation O-method: Analysis and Numerical Tests
http://hdl.handle.net/1956/8353
Exact and Superconvergent Solutions of the Multi-Point Flux Approximation O-method: Analysis and Numerical Tests
Olderkjær, Daniel Stensrud
Master thesis
In this thesis we prove that the multi-point flux approximation O-method (MPFA) yields exact potential and flux for the trigonometric potential functions u(x,y)=sin(x)sin(y) and u(x,y)=cos(x)cos(y). This is done on uniform square grids in a homogeneous medium, with the principal directions of the permeability aligned with the grid directions, under periodic boundary conditions. Earlier theoretical and numerical convergence studies suggest that these potential functions should yield only second order convergence. Hence, our motivation for the analysis was to gain new insight into the convergence of the method, as well as to develop theoretical proofs for what appear to be suitable examples for testing implementations. An extension of the result to uniform rectangular grids in an isotropic medium is also briefly discussed, before we develop a numerical overview of the exactness phenomenon for different types of boundary conditions. Lastly, we investigate the application of these results to obtain exact potential and flux with the MPFA method for general potential functions approximated by Fourier series.
Mon, 02 Jun 2014 00:00:00 GMThttp://hdl.handle.net/1956/83532014-06-02T00:00:00ZInvestigations of the Kaup-Boussinesq model equations for water waves
http://hdl.handle.net/1956/8335
Investigations of the Kaup-Boussinesq model equations for water waves
Juliussen, Bjørn-Sverre Hjelle
Master thesis
The Kaup-Boussinesq system is a coupled system of nonlinear partial differential equations which has been derived as a model for surface waves in the context of the Boussinesq scaling, and it has also been derived for an internal wave system. In this thesis, modeling properties of the Kaup-Boussinesq water-wave model are under investigation. Differential balance laws for mass, momentum and energy are considered, and we present an exact differential balance for momentum. A Kaup-Boussinesq system describing long internal waves is investigated and compared with the Gardner equation. Finally, a spectral method for the numerical discretization of the Kaup-Boussinesq system for surface waves is put forward, and shown to converge and be stable.
Mon, 02 Jun 2014 00:00:00 GMThttp://hdl.handle.net/1956/83352014-06-02T00:00:00ZRadielle basisfunksjoner som adaptiv kollokasjonsmetode
http://hdl.handle.net/1956/8333
Radielle basisfunksjoner som adaptiv kollokasjonsmetode
Nicolajsen, Tomas
Master thesis
The purpose of this thesis is to investigate an adaptive collocation method based on radial basis functions. The goal is to find out whether simple criteria for the distribution of centres and shape parameters can produce a distribution that reflects properties of the problem, and whether this can yield a more efficient and stable collocation method.
Sun, 01 Jun 2014 00:00:00 GMThttp://hdl.handle.net/1956/83332014-06-01T00:00:00ZBegrepsforståelse i matematikkfaget
http://hdl.handle.net/1956/8332
Begrepsforståelse i matematikkfaget
Vinnes, Eivind Ole
Master thesis
The intention of this thesis is to assess the basic mathematical knowledge of pupils entering upper secondary school, and to give some suggestions for how teaching can be improved.
Fri, 30 May 2014 00:00:00 GMThttp://hdl.handle.net/1956/83322014-05-30T00:00:00ZRekneark på ungdomstrinnet
http://hdl.handle.net/1956/8331
Rekneark på ungdomstrinnet
Johansson, Kari
Master thesis
The research question of this thesis was how pupils experience the use of spreadsheets in mathematics at the lower secondary level. Over the course of one year, various spreadsheet exercises were tried out in the 9th/10th grade. Afterwards, I collected data through questionnaires, recordings of pairs of pupils solving selected exercises, and a group interview. I focused on which advantages and problems the pupils experienced, and which types of exercises are relevant to them.
Fri, 06 Jun 2014 00:00:00 GMThttp://hdl.handle.net/1956/83312014-06-06T00:00:00ZHybrids between common and Antarctic minke whales are fertile and can back-cross
http://hdl.handle.net/1956/8118
Hybrids between common and Antarctic minke whales are fertile and can back-cross
Glover, Kevin; Kanda, Naohisa; Haug, Tore; Pastene, Luis A.; Øien, Nils Inge; Seliussen, Bjørghild Breistein; Sørvik, Anne Grete Eide; Skaug, Hans J.
Journal article
<p>Background:
Minke whales are separated into two genetically distinct species: the Antarctic minke whale found in
the southern hemisphere, and the common minke whale which is cosmopolitan. The common minke whale is
further divided into three allopatric sub-species found in the North Pacific, southern hemisphere, and the North
Atlantic. Here, we aimed to identify the genetic ancestry of a pregnant female minke whale captured in the North
Atlantic in 2010, and her fetus, using data from the mtDNA control region, 11 microsatellite loci and a sex
determining marker.</p><p>Results:
All statistical parameters demonstrated that the mother was a hybrid, displaying maternal and paternal
contributions from North Atlantic common and Antarctic minke whales, respectively. Her female fetus displayed
greater genetic similarity to North Atlantic common minke whales than the mother herself did, strongly suggesting that the hybrid
mother had paired with a North Atlantic common minke whale.</p><p>Conclusion:
This study clearly demonstrates, for the first time, that hybrids between minke whale species may be
fertile, and that they can back-cross. Whether contact between these species represents a contemporary event
linked with documented recent changes in the Antarctic ecosystem, or has occurred at a low frequency over many
years, remains open.</p>
Mon, 15 Apr 2013 00:00:00 GMThttp://hdl.handle.net/1956/81182013-04-15T00:00:00ZReversible Jump Markov Chain Monte Carlo: Some Theory and Applications
http://hdl.handle.net/1956/8023
Reversible Jump Markov Chain Monte Carlo: Some Theory and Applications
Lyyjynen, Hannu Sakari
Master thesis
The history of MCMC, theories of Bayesian thinking and model choice, the Accept-Reject algorithm, Markov chains, the Metropolis-Hastings algorithm and reversible jump MCMC are explained. Then reversible jump MCMC is applied as a change-point analysis to the coal mine disaster example, familiar from [Green, 1995], and to examples of counting terrorism attacks (worldwide, in Iraq and in Afghanistan). The novel part is estimating the change points of the hazard rate of terrorism attacks in Afghanistan during the last 35 years.
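For readers unfamiliar with the building blocks listed above, a minimal random-walk Metropolis-Hastings sampler (the fixed-dimension special case, not reversible jump) might look like the sketch below; the target density and all names are illustrative, not taken from the thesis.

```python
import math
import random

def metropolis(logpdf, x0, n_samples, step=1.0, seed=0):
    """Random-walk Metropolis sampler for a one-dimensional target density."""
    rng = random.Random(seed)
    x, logp = x0, logpdf(x0)
    chain = []
    for _ in range(n_samples):
        proposal = x + rng.gauss(0.0, step)
        logp_prop = logpdf(proposal)
        # symmetric proposal, so the acceptance ratio is just the density ratio
        if rng.random() < math.exp(min(0.0, logp_prop - logp)):
            x, logp = proposal, logp_prop
        chain.append(x)
    return chain

# sample a standard normal target; log density is -x^2/2 up to a constant
chain = metropolis(lambda x: -0.5 * x * x, 0.0, 20000)
mean = sum(chain) / len(chain)
var = sum((x - mean) ** 2 for x in chain) / len(chain)
```

Reversible jump MCMC generalizes this acceptance step with a Jacobian factor so that the chain can move between parameter spaces of different dimension, e.g. models with different numbers of change points.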
Mon, 10 Mar 2014 00:00:00 GMThttp://hdl.handle.net/1956/80232014-03-10T00:00:00ZGeological Storage of CO2: Sensitivity and Risk Analysis
http://hdl.handle.net/1956/7913
Geological Storage of CO2: Sensitivity and Risk Analysis
Ashraf, Meisam
Doctoral thesis
<p>Geological CO2 storage has the potential to be a key technology for prevention of industrial CO2 emission
into the atmosphere. A successful storage operation requires safe geological structures with large
storage capacity. The practicality of the technology is challenged by various operational concerns,
ranging from site selection to long-term monitoring of the injected CO2. The research in this report addresses
the value of using sophisticated geological modeling to help in predicting storage performance.</p><p>In the first part, we investigate the significance of assessing the geological uncertainty and its consequences
in site selection and the early stages of storage operations. This includes the injection period
and the early migration time of the injected CO2 plume. The extensive set of realistic geological realizations
used in the analysis forms the key part of this research. Heterogeneity is modelled using the
most influential geological parameters in a shallow-marine system, including aggradation angle, levels
of barriers in the system, faults, lobosity, and progradation direction.</p><p>A typical injection scenario is simulated over 160 realizations and major flow responses are defined
to measure the success of the early stages of CO2 storage operations. These responses include the
volume of trapped CO2 by capillarity, dynamics of the plume in the medium, pressure responses, and
the risk of leakage through a failure in the sealing cap-rock. The impact of geological uncertainty
on these responses is investigated by comparing all cases for their performance. The results show
large variations in the responses due to changing geological parameters. Among the main influential
parameters are aggradation angle, progradation direction, and faults in the medium.</p><p>A sophisticated geological uncertainty study requires a large number of detailed simulations that
are time-consuming and computationally costly. The second part of the research introduces a workflow
that employs an approximating response surface method called arbitrary polynomial chaos (aPC). The
aPC is fast and sophisticated enough to be used practically in the process of sensitivity analysis and
uncertainty and risk assessment. We demonstrate the workflow by combining the aPC with a global
sensitivity analysis technique, the Sobol indices, which is a variance-based method proven to be practical
for complicated physical problems. Probabilistic uncertainty analysis is performed by applying
the Monte Carlo process using the aPC. The results show that the aPC can be used successfully in an
extensive geological uncertainty study.</p>
Thu, 10 Apr 2014 00:00:00 GMThttp://hdl.handle.net/1956/79132014-04-10T00:00:00ZStatistiske metoder for alder-periode-kohort-analyser. En sammenligning av nyere metoder med konvensjonelle generaliserte lineære modeller
http://hdl.handle.net/1956/7795
Statistiske metoder for alder-periode-kohort-analyser. En sammenligning av nyere metoder med konvensjonelle generaliserte lineære modeller
Askeland, Olaug Margrete
Master thesis
This master's thesis deals with statistical methods for age-period-cohort (APC) analyses. For a given response, APC analyses attempt to separate the influence of age from the influence associated with time period and the influence associated with time of birth. The well-known relationship between the three factors, period - age = cohort, makes parameter estimation difficult, and a general dilemma in APC analyses is the problem of separating the simultaneous effects. Many solutions to this identification problem have been proposed, and in this thesis newer methods are compared against conventional generalized linear models. The comparisons are made by simulation studies. In particular, a newer method, the Intrinsic Estimator (IE), is compared with a widely used method, the Constrained Generalized Linear Models estimator (CGLIM). The CGLIM method depends on introducing a constraint on the coefficients, and its problem is that it relies on prior information about the data to set this constraint; the coefficient estimates are sensitive to the choice of constraint. The IE method attempts to achieve model identification with minimal assumptions. Furthermore, comparisons are made with methods based on first-order and second-order differences, and with methods that include drift. When analysing some data sets one does not need the full APC model; it suffices to use methods that include only one or two of the factors. Various goodness-of-fit measures are then useful for assessing whether a model fits a data set well enough, and the IE method is also compared with such methods. A newer method based on Partial Least Squares for estimating the simultaneous effects of age, period and cohort is also introduced.
The IE method has proven to be a useful approach for identification and estimation in the APC model, producing unbiased and efficient estimates. It is a safer choice when nothing is known about the data to be analysed, since in such cases an arbitrary choice of constraint in the other methods may, in the worst case, yield estimates far from the truth.
Wed, 20 Nov 2013 00:00:00 GMThttp://hdl.handle.net/1956/77952013-11-20T00:00:00ZOptimization of CO2 Geological Storage Cost
http://hdl.handle.net/1956/7704
Optimization of CO2 Geological Storage Cost
Zhu, Sha
Master thesis
Carbon dioxide capture and storage (CCS) is a promising strategy for combating climate change by injecting carbon dioxide at large scale back into underground formations and storing it, possibly permanently. It is an existing technology, but from a climatic and economic point of view it is still a relatively new concept. We are interested in studying the cost of CCS, in particular the cost of CO2 geological storage, and in optimizing this cost, since the high cost of CCS is a major hurdle for industrial deployment of the technology.
Sun, 01 Dec 2013 00:00:00 GMThttp://hdl.handle.net/1956/77042013-12-01T00:00:00ZModeling and simulation of concrete carbonation in 1-d using two-point flux approximation
http://hdl.handle.net/1956/7689
Modeling and simulation of concrete carbonation in 1-d using two-point flux approximation
Røe, Tineke
Master thesis
In this master's thesis we model concrete carbonation using mass conservation equations and Darcy's law, which yields a set of coupled partial differential equations. We also have an ordinary differential equation modelling porosity change. These equations are discretized using the
two-point flux approximation in one spatial dimension, and the implicit Euler method in time. The nonlinear pressure equation is linearized using Newton's method. We run simulations, and the results are compared with a constructed analytical solution and with examples from the literature.
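The spatial discretization named above can be illustrated on a simpler model problem. The sketch below applies a cell-centred 1-d two-point flux approximation, with harmonic averaging at interfaces, to a stationary diffusion equation -(K u')' = f with Dirichlet data. It is a generic elliptic solve under our own assumptions, not the thesis's coupled carbonation model.

```python
def tpfa_1d(K, f, uL, uR):
    """Cell-centred TPFA for -(K u')' = f on (0, 1) with Dirichlet data uL, uR.
    K and f hold per-cell values on a uniform grid of n cells."""
    n = len(K)
    h = 1.0 / n
    # interface transmissibilities: harmonic average of the neighbouring cell K's
    T = [2.0 * K[i] * K[i + 1] / ((K[i] + K[i + 1]) * h) for i in range(n - 1)]
    TL, TR = 2.0 * K[0] / h, 2.0 * K[-1] / h   # half-cell distance to each boundary
    lower, diag, upper, rhs = [0.0] * n, [0.0] * n, [0.0] * n, [0.0] * n
    for i in range(n):
        tl = TL if i == 0 else T[i - 1]
        tr = TR if i == n - 1 else T[i]
        if i > 0:
            lower[i] = -tl
        if i < n - 1:
            upper[i] = -tr
        diag[i] = tl + tr
        rhs[i] = f[i] * h                      # source integrated over the cell
    rhs[0] += TL * uL
    rhs[-1] += TR * uR
    # Thomas algorithm for the tridiagonal system
    for i in range(1, n):
        m = lower[i] / diag[i - 1]
        diag[i] -= m * upper[i - 1]
        rhs[i] -= m * rhs[i - 1]
    u = [0.0] * n
    u[-1] = rhs[-1] / diag[-1]
    for i in range(n - 2, -1, -1):
        u[i] = (rhs[i] - upper[i] * u[i + 1]) / diag[i]
    return u

# constant K and zero source: the linear profile is reproduced exactly
u = tpfa_1d([1.0] * 10, [0.0] * 10, 0.0, 1.0)
```

In the thesis the same flux stencil sits inside a time loop (implicit Euler) with Newton linearization of the nonlinear pressure equation at each step.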
Thu, 21 Nov 2013 00:00:00 GMThttp://hdl.handle.net/1956/76892013-11-21T00:00:00ZNegativ binomisk regresjon med modifiserte sannsynligheter for nullobservasjoner; ZINB og ZANB
http://hdl.handle.net/1956/7686
Negativ binomisk regresjon med modifiserte sannsynligheter for nullobservasjoner; ZINB og ZANB
Optun, Mette
Master thesis
Regression models based on the Poisson and negative binomial distributions are often used to perform regression analysis on data sets in medicine, biology, economics and many other fields. Cases often arise where the proportion of zeros in the data set does not agree with what is expected under the assumption that the observations follow either a negative binomial (NB) or a Poisson distribution. One example is the number of insurance claims reported in a year by a group of policyholders, which, due to deductibles, will contain more zero observations than expected. The number of days a patient is hospitalized, where only admitted patients are included in the calculation, is another example where the expected number of zero observations will differ from that of NB- or Poisson-distributed data.
This thesis considers two models designed to fit data sets that are assumed to be negative binomially distributed but contain many additional zero observations.
Before introducing these models, some theory on negative binomial distributions and the regression models based on them is included. This provides a good foundation for understanding the structure of the underlying distributions and of the ZINB and ZANB models, which are then explained thoroughly. Finally, it is examined whether the choice between the two models is decisive for the result after fitting various data sets, and what the consequences may be of using either ZINB or ZANB as the regression model when the other is the correct one. In addition, it is assessed whether it is generally possible to tell which model is most appropriate from properties of the observed data set.
Wed, 06 Nov 2013 00:00:00 GMThttp://hdl.handle.net/1956/76862013-11-06T00:00:00ZVolume preserving numerical integrators for ordinary differential equations
http://hdl.handle.net/1956/7563
Volume preserving numerical integrators for ordinary differential equations
Xue, Huiyan
Doctoral thesis
Fri, 08 Nov 2013 00:00:00 GMThttp://hdl.handle.net/1956/75632013-11-08T00:00:00ZGenerating function and volume preserving mappings
http://hdl.handle.net/1956/7562
Generating function and volume preserving mappings
Xue, Huiyan; Zanna, Antonella
Peer reviewed
In this paper, we study generating forms and generating functions for volume preserving mappings in Rn. We derive some parametric classes of volume preserving numerical schemes for divergence-free vector fields. In passing, by an extension of the Poincaré generating function and a change of variables, we obtain a symplectic equivalent of the theta-method for differential equations, which includes the implicit midpoint rule and the symplectic Euler A and B methods as special cases.
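The implicit midpoint rule mentioned as a special case can be demonstrated on the harmonic oscillator, where the implicit step is linear and can be solved in closed form; this toy example is ours, not the paper's. For the skew-symmetric system q' = p, p' = -q the resulting map is a Cayley transform, hence orthogonal, so it preserves the quadratic invariant q^2 + p^2 exactly.

```python
def midpoint_step(q, p, h):
    """One implicit midpoint step for q' = p, p' = -q.
    The linear implicit equations are solved in closed form:
    (I - (h/2)A)^{-1} (I + (h/2)A) with A = [[0, 1], [-1, 0]]."""
    a = h / 2.0
    det = 1.0 + a * a
    qn = ((1.0 - a * a) * q + 2.0 * a * p) / det
    pn = (-2.0 * a * q + (1.0 - a * a) * p) / det
    return qn, pn

q, p = 1.0, 0.0
for _ in range(1000):
    q, p = midpoint_step(q, p, 0.1)
energy = q * q + p * p   # preserved exactly by the method, up to round-off
```

A non-geometric method of the same order (e.g. explicit midpoint) would show a systematic drift in this quantity over long integrations.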
Sat, 01 Mar 2014 00:00:00 GMThttp://hdl.handle.net/1956/75622014-03-01T00:00:00ZHigh order volume preserving integrators for three kinds of divergence-free vector fields via commutators
http://hdl.handle.net/1956/7561
High order volume preserving integrators for three kinds of divergence-free vector fields via commutators
Xue, Huiyan
Journal article
In this paper, we focus on the construction of high order volume preserving integrators for divergence-free vector fields in the monomial basis, the exponential basis, and the tensor product of the monomial and exponential bases. We first prove that the commutators of elementary divergence-free vector fields (EDFVFs) of these three kinds are still divergence-free vector fields of the same kind. For EDFVFs of these three kinds, we construct high order volume preserving integrators using multi-commutators. Moreover, we consider the ordering of EDFVFs and their commutators to reduce the error of the schemes, showing by numerical tests that the strategies in [8] work well.
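The basic splitting idea behind such integrators can be sketched on a two-dimensional example of our own choosing (not from the paper): a divergence-free field is split into elementary sub-fields whose flows are exact shears, so each sub-map, and hence their composition, preserves volume exactly.

```python
import math

def shear_splitting_step(x, y, h):
    """One step of a splitting integrator for the divergence-free field
    v(x, y) = (sin y, sin x). Each sub-field has one frozen coordinate,
    so its exact flow is a shear with Jacobian determinant 1."""
    x = x + h * math.sin(y)   # exact flow of the sub-field (sin y, 0)
    y = y + h * math.sin(x)   # exact flow of the sub-field (0, sin x)
    return x, y

# verify volume preservation numerically via a finite-difference Jacobian
eps, h = 1e-6, 0.1
x0, y0 = 0.3, 0.7
bx, by = shear_splitting_step(x0, y0, h)
xpx, ypx = shear_splitting_step(x0 + eps, y0, h)
xpy, ypy = shear_splitting_step(x0, y0 + eps, h)
J11, J21 = (xpx - bx) / eps, (ypx - by) / eps
J12, J22 = (xpy - bx) / eps, (ypy - by) / eps
det = J11 * J22 - J12 * J21   # should be 1 up to finite-difference error
```

The paper's construction follows the same principle but builds higher order compositions from EDFVFs and their commutators.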
Tue, 01 Jan 2013 00:00:00 GMThttp://hdl.handle.net/1956/75612013-01-01T00:00:00ZDetecting Periodic Elements in Higher Topological Hochschild Homology
http://hdl.handle.net/1956/7560
Detecting Periodic Elements in Higher Topological Hochschild Homology
Veen, Torleif
Doctoral thesis
Tue, 20 Aug 2013 00:00:00 GMThttp://hdl.handle.net/1956/75602013-08-20T00:00:00ZImage processing, filtering and segmentation of data-sets for reservoir simulation
http://hdl.handle.net/1956/7543
Image processing, filtering and segmentation of data-sets for reservoir simulation
Gurholt, Tiril Pedersen
Doctoral thesis
Paper A: 2-D Visualisation of Unstable Waterflood and Polymer Flood for Displacement
of Heavy Oil. Full-text not available in BORA. The published version is available at: <a href="http://dx.doi.org/10.2118/154292-ms" target="blank">http://dx.doi.org/10.2118/154292-ms</a>; Paper B: Determination of Connectivity in Vuggy Carbonate Rock Using Image
Segmentation Techniques. Full-text not available in BORA.; Paper C: Pore space characterization of vuggy carbonate rocks: A comparative
study of the performance of various image segmentation techniques. Full-text not available in BORA.; Paper D: 3D Multiphase Piecewise Constant Level Set Method Based on Graph
Cut Minimization. Full-text not available in BORA.
Fri, 11 Oct 2013 00:00:00 GMThttp://hdl.handle.net/1956/75432013-10-11T00:00:00ZModelling Fluid Flow and Heat Transport in Fractured Porous Media
http://hdl.handle.net/1956/7509
Modelling Fluid Flow and Heat Transport in Fractured Porous Media
Lampe, Victor
Master thesis
Flow in porous media is an important and well-researched topic in both academia and industry, and more accurate mathematical models are constantly sought. Fractures in a porous medium can be an important contributor to the overall dynamics of a porous system, and several conceptual models for fractured porous media have been proposed over the years. We investigate a selection of these models, starting with the basics of non-fractured porous media. We discuss the effects of fractures on both single-phase fluid flow and heat transport, and review a few different approaches to the modelling of fractured porous media. Three different models, an equivalent continuum model, a dual continuum model and a discrete fracture model, are compared through analysis and numerical experiments. Our simulations show that the equivalent continuum model is unreliable and falls short of the other two, from which we obtain comparable results.
Tue, 27 Aug 2013 00:00:00 GMThttp://hdl.handle.net/1956/75092013-08-27T00:00:00ZAuxiliary variables for 3D multiscale simulations in heterogeneous porous media
http://hdl.handle.net/1956/7429
Auxiliary variables for 3D multiscale simulations in heterogeneous porous media
Sandvin, Andreas; Keilegavlen, Eirik; Nordbotten, Jan Martin
Journal article
<p>The multiscale control-volume methods for solving problems involving flow in porous media have gained
much interest during the last decade. Recasting these methods in an algebraic framework allows one to
consider them as preconditioners for iterative solvers. Despite intense research on the 2D formulation,
few results have been shown for 3D, where indeed the performance of multiscale methods deteriorates. The
interpretation of multiscale methods as vertex based domain decomposition methods, which are non-scalable
for 3D domain decomposition problems, allows us to understand this loss of performance.</p><p>We propose a generalized framework based on auxiliary variables on the coarse scale. These are enrichments of
the coarse scale, which can be selected to improve the interpolation onto the fine scale. Where the
existing coarse scale basis functions are designed to capture local sub-scale heterogeneities, the auxiliary variables
are aimed at better capturing non-local effects resulting from non-linear behavior of the pressure field.
The auxiliary coarse nodes fit into the framework of mass-conservative domain-decomposition (MCDD)
preconditioners, allowing us to construct, as special cases, both the traditional (vertex based) multiscale
methods as well as their wire basket generalization.</p>
Mon, 01 Apr 2013 00:00:00 GMThttp://hdl.handle.net/1956/74292013-04-01T00:00:00ZA unified multilevel framework of upscaling and domain decomposition
http://hdl.handle.net/1956/7417
A unified multilevel framework of upscaling and domain decomposition
Sandvin, Andreas; Nordbotten, Jan Martin; Aavatsmark, Ivar
Conference object
We consider multiscale preconditioners for a class of mass-conservative domain-decomposition (MCDD) methods. For the application of reservoir simulation, we need to solve large linear systems arising from finite-volume discretisations of elliptic PDEs with highly variable coefficients. We introduce an algebraic framework, based on probing, for constructing mass-conservative operators on multiple coarse scales. These operators may further be applied as coarse spaces for additive Schwarz preconditioners. By applying different local approximations to the Schur complement system, based on a careful choice of probing vectors, we show that the MCDD preconditioners can serve both as efficient preconditioners for iterative methods and as accurate upscaling techniques for the heterogeneous elliptic problem. Our results show that the probing technique yields better approximation properties than the reduced boundary condition commonly applied with multiscale methods.
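The Schwarz idea underlying such preconditioners can be shown on the smallest possible example. The sketch below is our own illustration of the classical alternating (multiplicative) Schwarz iteration for a 1-d Poisson problem with two overlapping subdomains; it has none of the coarse spaces, probing or Schur complement machinery of the abstract, but shows the subdomain-solve-and-exchange structure those methods accelerate.

```python
def solve_tridiag(sub, diag, sup, rhs):
    """Thomas algorithm; sub[0] and sup[-1] are unused."""
    n = len(diag)
    diag, rhs = diag[:], rhs[:]
    for i in range(1, n):
        m = sub[i] / diag[i - 1]
        diag[i] -= m * sup[i - 1]
        rhs[i] -= m * rhs[i - 1]
    x = [0.0] * n
    x[-1] = rhs[-1] / diag[-1]
    for i in range(n - 2, -1, -1):
        x[i] = (rhs[i] - sup[i] * x[i + 1]) / diag[i]
    return x

def schwarz_poisson(n=40, overlap=6, sweeps=50):
    """Alternating Schwarz for -u'' = 1 on (0, 1), u(0) = u(1) = 0,
    with two overlapping subdomains on a grid of n interior points."""
    h = 1.0 / (n + 1)
    u = [0.0] * (n + 2)                  # u[0] and u[n+1] hold the boundary values
    left = (1, n // 2 + overlap)         # interior index ranges of the subdomains
    right = (n // 2 - overlap, n)
    for _ in range(sweeps):
        for s, e in (left, right):
            m = e - s + 1
            # -u'' = 1 discretised: 2u_i - u_{i-1} - u_{i+1} = h^2
            rhs = [h * h] * m
            rhs[0] += u[s - 1]           # Dirichlet data from the current iterate
            rhs[-1] += u[e + 1]
            u[s:e + 1] = solve_tridiag([-1.0] * m, [2.0] * m, [-1.0] * m, rhs)
    return u

u = schwarz_poisson()                    # converges to u(x) = x(1 - x)/2 at the nodes
```

In 1-d the iteration contracts geometrically with the overlap; the multiscale coarse spaces discussed above exist precisely because this plain iteration scales poorly in higher dimensions.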
Presented at CMWR 2010 - XVIII International Conference on Computational Methods in Water Resources, June 21-24, 2010, Barcelona, Spain
Fri, 01 Jan 2010 00:00:00 GMThttp://hdl.handle.net/1956/74172010-01-01T00:00:00ZRobust Multiscale Control-volume Methods for Reservoir Simulation
http://hdl.handle.net/1956/7416
Robust Multiscale Control-volume Methods for Reservoir Simulation
Sandvin, Andreas
Doctoral thesis
Mon, 30 Apr 2012 00:00:00 GMThttp://hdl.handle.net/1956/74162012-04-30T00:00:00ZMass Conservative Domain Decomposition for Porous Media Flow
http://hdl.handle.net/1956/7415
Mass Conservative Domain Decomposition for Porous Media Flow
Nordbotten, Jan Martin; Keilegavlen, Eirik; Sandvin, Andreas
Chapter
Wed, 28 Mar 2012 00:00:00 GMThttp://hdl.handle.net/1956/74152012-03-28T00:00:00ZEnergieffektivitet i grunne geotermiske systemer. Modellering og analyse av systemet på Ljan skole.
http://hdl.handle.net/1956/7109
Energieffektivitet i grunne geotermiske systemer. Modellering og analyse av systemet på Ljan skole.
Straalberg, Eirik Ask
Master thesis
Ground-source heat is an increasingly important and widely used energy source in a world with growing energy demand, and it is therefore important to design systems that can exploit this energy efficiently. This thesis examines the shallow, closed-loop geothermal installation at Ljan school in Oslo, where a geothermal heat pump is supplied with heat from a geothermal borehole field and an asphalt-covered ground solar collector. A main goal of the thesis is to assess the energy efficiency of this system. This is done with the simulation tool TRNSYS, in which system models of the installation are built. Borehole-based storage of solar heat is studied in particular, and it is assessed whether this is an energy-efficient solution. To verify the models and results, selected simulation results are compared with operational data from Ljan school.
It is concluded that the system at Ljan school is energy efficient, and the calculations indicate that the geothermal system nearly halves heating costs compared with an electric or oil-fired heating system. The system does not, however, appear to benefit from borehole-based energy storage: simulations without energy storage show even higher energy efficiency than simulations with storage, also in the long term.
Mon, 17 Jun 2013 00:00:00 GMThttp://hdl.handle.net/1956/71092013-06-17T00:00:00ZSyntetisk CDO: iTraxx-prising ved bruk av en Normal Invers Gaussisk Copula
http://hdl.handle.net/1956/7076
Syntetisk CDO: iTraxx-prising ved bruk av en Normal Invers Gaussisk Copula
Nordanå, Kjetil Sørlien
Master thesis
In this thesis we will see that there is a large market for credit derivatives after the financial crisis, but with a shift towards synthetic variants. The outlook is nevertheless uncertain, as these products are singled out by new regulation. Even so, credit derivatives offer great opportunities for transferring risk, so that market participants can offload parts of their portfolios. For market participants who either already hold a position or are considering investments in such derivatives, it is important to have models that can reproduce market prices with small deviations in a time-efficient manner.

The thesis therefore presents and tests three models on the tranched iTraxx Europe index: Gaussian-LHP, Student t-copula, and NIG-LHP. We will see that only the last of these reproduces market prices satisfactorily in an efficient way, and that it consistently prices the junior mezzanine tranche correctly. The reason is the other models' inability to reproduce the correlation smile observed in the market. In addition, they have only one parameter, the correlation, which is not enough to capture a structure as complex as a synthetic CDO. The NIG model has several (intuitive) parameters in addition to the correlation, and therefore fits market prices more easily. Moreover, the NIG model we present is semi-analytic and thus fast to compute. Throughout the thesis we explore its ability to reproduce market prices over several different time periods, and we found the period during the financial crisis particularly difficult, owing to the low liquidity in the market. With the ongoing debt crisis in the European countries, we can expect continued demand for credit derivatives on European companies, so it will be interesting to follow developments ahead.
Thu, 04 Apr 2013 00:00:00 GMThttp://hdl.handle.net/1956/70762013-04-04T00:00:00ZSquare-free modules and ideals: Brill–Noether theory, polarizations, and deformations.
http://hdl.handle.net/1956/6998
Square-free modules and ideals: Brill–Noether theory, polarizations, and deformations.
Lohne, Henning
Doctoral thesis
Fri, 24 May 2013 00:00:00 GMThttp://hdl.handle.net/1956/69982013-05-24T00:00:00ZBrill–Noether theory of squarefree modules supported on a graph
http://hdl.handle.net/1956/6994
Brill–Noether theory of squarefree modules supported on a graph
Fløystad, Gunnar; Lohne, Henning
Journal article; Peer reviewed
We investigate the analogy between squarefree Cohen-Macaulay
modules supported on a graph and line bundles on a
curve. We prove a Riemann–Roch theorem, we study the Jacobian
and gonality of a graph, and we prove Clifford's theorem.
Wed, 01 May 2013 00:00:00 GMThttp://hdl.handle.net/1956/69942013-05-01T00:00:00ZAnalysis of multi-beam sonar echos of herring schools by means of simulation
http://hdl.handle.net/1956/6967
Analysis of multi-beam sonar echos of herring schools by means of simulation
Holmin, Arne Johannes
Doctoral thesis
<p>The synchronized behavior of large schools of fish can be a fascinating sight and has
caught the attention of researchers for decades. Schools of thousands, or even millions of
fish can seemingly function as a single entity, by mechanisms that are still not entirely
understood. Models describing the individual behavior (individual based models) have
been shown to predict certain schooling features such as predator avoidance, whereas
experiments with fish schools in tanks have revealed rules governing the interaction between
neighboring fish. However, the step from an individual-based model, or a school
of a limited number of fish in a tank experiment, to large free-swimming schools in the
ocean may involve challenges with regard to the observation techniques and to conditions
of the environment that are not necessarily reproduced in the individual-based
model or in the tank.</p> <p>The latest technological advances in underwater acoustic observation have introduced
the potential of observing schools of fish up to a few hundred meters in size through three-dimensional
images generated by multi-beam sonars. The Simrad MS70 multi-beam
sonar provides true three-dimensional images at a temporal resolution down to approximately
one image per second, enabling observations of the dynamic behavior of large
fish schools in situ, at a spatial and temporal resolution that has not been previously
available. In this thesis, data from the MS70 sonar are analyzed by means of simulation,
and steps are taken towards establishing a link between the modeled behavior of an individual
and the observed behavior of real fish schools.</p> <p>The principal analytical tool utilized in the thesis is a simulation model developed by the
author (and co-authors), which simulates observations from multi-beam sonars based on
the positions, orientations, and acoustical properties of arbitrary groups of individual
fish, and the configuration and acoustical properties of the sonar and the environment.
The framework of the simulation model is presented in the first of three papers in the
thesis, along with examples of its use as implemented for the EK60 multi-frequency
echosounder, the ME70 multi-beam echosounder, and the MS70 multi-beam sonar, all
manufactured by Simrad. The simulation experiments shown in the first paper illustrate
for example the potential of the MS70 sonar to provide information about the behavior
of fish schools. Specifically, in one of the experiments, a herring school with original
mean heading perpendicular to the central sonar beams is modified to represent eight
different idealized orientation scenarios, obtained by rotating the fish in specific sections
of the school by 90° towards the sonar. This produced reduced backscatter in the sections
of the school which were rotated, due to the directionality of the scattering from
herring at the acoustic frequencies of the sonar, showing the potential for falsely interpreting
orientation changes as fish density changes, but also the potential for extracting information about the local orientation distribution of the school.</p> <p>In the second paper, the parameters and stochastic properties of the noise (defined
as all contributions to the received acoustic intensity not backscattered from targets)
present in data from the MS70 sonar and EK60 echosounder are estimated from passive
recording sequences, motivated by the following three potential uses: (1) subtraction
of noise from real data, (2) simulation of noise in synthetic data, and (3) to provide a
basis for the development of methods for segmentation of MS70 data of fish schools (applied
in the third paper). Particularly, a simulation experiment from the first paper is
repeated with addition of noise, in which the polarization (degree of alignment of the
individual fish) of a real school which had been circumnavigated by the vessel for several
rounds, was estimated by matching the total backscatter of the real school and the
total backscatter of simulated schools with a variety of polarizations and packing densities.
The experiment illustrates the effect of noise on the estimated polarization, and
the potential to infer packing density of the school from the simulation experiment.</p> <p>The estimated noise from the second paper is utilized in the third and final paper, in
the development and testing of a new segmentation method for multi-beam sonar data,
which is compared to an existing segmentation method implemented in the post processing
system LSSS (Large Scale Survey System). The new method applies a Bayesian
approach, where the cumulative distribution function (CDF) of the packing density of
omnidirectional targets (scattering equally in all directions) in a voxel is estimated based
on the observed data, the estimated noise, and a prior probability distribution of the
signal. The CDF is evaluated at a specified lower schooling threshold, and the resulting
probabilities that the packing density is below the schooling threshold are smoothed by a
Gaussian kernel in the logarithmic domain (implying products of the probabilities in the
neighborhood around the voxel). Voxels for which the smoothed probability is below a
segmentation threshold are identified as containing the school. The two segmentation methods
are tested on simulated data of 240 herring schools of various shapes, sizes, packing
densities, and depths, and compared with ground truth segmentation data generated
from the fish positions used as input to the simulation model to identify recommended
parameter settings for both methods, and to determine differences in the performance
between the methods. The new method is shown to produce estimates of the school extent,
total volume, total target strength (total echo), and mean volume backscattering
strength (mean echo density) which are generally closer to the corresponding theoretical
values estimated from the ground truth segmentation data.</p>
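The log-domain smoothing step described for the new segmentation method can be sketched in one dimension: applying a normalized Gaussian kernel to log-probabilities amounts to taking a weighted geometric mean of the probabilities in each voxel's neighborhood. A minimal illustration (kernel size and parameter values are arbitrary choices, not those of the thesis):

```python
import math

def gaussian_weights(radius, sigma):
    """Normalized Gaussian kernel weights on {-radius, ..., radius}."""
    w = [math.exp(-k * k / (2.0 * sigma * sigma)) for k in range(-radius, radius + 1)]
    s = sum(w)
    return [x / s for x in w]

def smooth_log_domain(probs, radius=2, sigma=1.0):
    """Gaussian smoothing of probabilities in the log domain: each output
    value is a weighted geometric mean over the voxel's neighborhood."""
    k = gaussian_weights(radius, sigma)
    logp = [math.log(p) for p in probs]
    out = []
    for i in range(len(probs)):
        acc = 0.0
        for d in range(-radius, radius + 1):
            j = min(max(i + d, 0), len(probs) - 1)  # clamp at the edges
            acc += k[d + radius] * logp[j]
        out.append(math.exp(acc))
    return out

p = [0.9, 0.9, 0.01, 0.9, 0.9]   # one isolated low-probability voxel
sm = smooth_log_domain(p)
# the outlier is pulled up towards its neighbors, and they are pulled down
```

Because products of probabilities are sums of log-probabilities, a single very small value suppresses its whole neighborhood less abruptly than it would under averaging of the raw probabilities.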
Fri, 31 May 2013 00:00:00 GMThttp://hdl.handle.net/1956/69672013-05-31T00:00:00ZSimulations of multi-beam sonar echos from schooling individual fish in a quiet environment
http://hdl.handle.net/1956/6958
Simulations of multi-beam sonar echos from schooling individual fish in a quiet environment
Holmin, Arne Johannes; Handegard, Nils Olav; Korneliussen, Rolf J.; Tjøstheim, Dag
Journal article; Peer reviewed
A model is developed and demonstrated for simulating echosounder and sonar observations of fish
schools with specified shapes and composed of individuals having specified target strengths and
behaviors. The model emulates the performances of actual multi-frequency echosounders and
multi-beam echosounders and sonars and generates synthetic echograms of fish schools that can be
compared with real echograms. The model enables acoustic observations of large in situ fish
schools to be evaluated in terms of individual and aggregated fish behaviors. It also facilitates
analyses of the sensitivity of fish biomass estimates to different target strength models and their
parameterizations. To demonstrate how this tool may facilitate objective interpretations of
acoustically estimated fish biomass and behavior, simulated echograms of fish with different spatial
and orientation distributions are compared with real echograms of herring collected with a multi-beam
sonar aboard the research vessel “G.O. Sars.” Results highlight the important effects of
fish-backscatter directivity, particularly when sensing with small acoustic wavelengths relative to
the fish length. Results also show that directivity is both a potential obstacle to estimating fish
biomass accurately and a potential source of information about fish behavior.
Sat, 01 Dec 2012 00:00:00 GMThttp://hdl.handle.net/1956/69582012-12-01T00:00:00ZComputer-aided proofs and algorithms in analysis
http://hdl.handle.net/1956/6817
Computer-aided proofs and algorithms in analysis
Bartha, Ferenc A.
Doctoral thesis
<p>Computational power has increased dramatically since the appearance of the first
computers, making them a vital tool in the analysis of dynamical systems. We present
further applications of two basic ideas, namely interval arithmetic and automatic
differentiation, that address the question of the reliability of the results and the difficulty
of calculating derivatives.</p>
<p>In general, the result of a numerical calculation will be influenced by errors, since
the set of the numbers represented by the machine is finite. This will inevitably lead
to round-off and truncation errors. This should be seen not as a problem, but
rather as the true nature of numerics. Notorious examples, such as evaluating
333.75y^6 + x^2(11x^2y^2 − y^6 − 121y^4 − 2) + 5.5y^8 + x/(2y) at (x, y) = (77617, 33096), or plotting the
polynomial t^6 − 6t^5 + 15t^4 − 20t^3 + 15t^2 − 6t + 1 in a small neighborhood of 1, still
result in unexpected outcomes if one is unaware of the potential risks of floating
point computations. We mention the failure of a Patriot missile on February 25, 1991
or the explosion of the unmanned space rocket Ariane 5 on June 4, 1996 as practical
examples of these potential risks becoming real.</p>
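The first of the "notorious examples" above is Rump's classical expression, and its failure can be reproduced directly. The sketch below evaluates it both in ordinary double precision and in exact rational arithmetic; the float result is off by many orders of magnitude because enormous intermediate terms cancel almost exactly:

```python
from fractions import Fraction

def rump(x, y, c1=333.75, c2=5.5):
    # 333.75*y^6 + x^2*(11*x^2*y^2 - y^6 - 121*y^4 - 2) + 5.5*y^8 + x/(2*y)
    return (c1 * y**6 + x**2 * (11 * x**2 * y**2 - y**6 - 121 * y**4 - 2)
            + c2 * y**8 + x / (2 * y))

naive = rump(77617.0, 33096.0)  # double precision: catastrophically wrong
exact = rump(Fraction(77617), Fraction(33096),
             c1=Fraction(1335, 4), c2=Fraction(11, 2))  # exact rationals

assert exact == Fraction(-54767, 66192)   # approx. -0.8273960599
assert abs(naive - float(exact)) > 1e6    # the float answer is far off
```

The polynomial part equals exactly −2 at this point, so the true value is x/(2y) − 2; in double precision the ~10^36-sized terms swallow this entirely.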
<p>Therefore, in mathematical proofs, where the beauty of the argument is its unquestionable
truth itself, the use of computers must be handled with extreme care. One
technique used to overcome these problems and make our computations rigorous
is interval arithmetic.</p>
<p>Calculating the derivatives of a given function is often considered a hard problem,
since, in general, the complexity of the formula for the derivative grows exponentially
as the order or the dimension increases. The observation that we generally do not need these
formulae, but only certain values of the derivatives, is crucial to understanding
why automatic differentiation is so useful.</p>
<p>The structure of the thesis is as follows. In Part I we give an introduction to the
methods used in our papers. In Chapter 1 we get acquainted with the basic techniques,
interval arithmetic, interval analysis, floating point computations and automatic differentiation.
Chapter 2 gives an overview of the interaction between dynamical systems and different representations of the data. In Chapter 3 we take on the basic concept of automatic differentiation seen before, and present a method by Griewank et al. [17] to compute higher order derivatives of multivariate functions that will be used in Paper A. We go through the theory of graph representations in Chapter 4 by following the steps of Hohmann and Dellnitz [12] and Galias [15]. This theory may be used in qualitative analysis of maps. We give two applications in Paper B and Paper C. In addition, we
give the proof of correctness of the algorithm for enclosing non-wandering points in Paper B. In Chapter 5 we introduce the reader to the method of self-consistent bounds by
Zgliczyński and Mischaikow [44] and Zgliczyński [40, 42, 43] that may be used to analyze
a certain class of dissipative partial differential equations. An application of this
concept to a destabilized Kuramoto-Sivashinsky equation is given in Paper D. Chapter 6
gives a short overview of the results of the included papers.
Part II is the main scientific contribution of this thesis, consisting of the formerly
mentioned four papers.</p>
Fri, 14 Jun 2013 00:00:00 GMThttp://hdl.handle.net/1956/68172013-06-14T00:00:00ZIll Posedness Results for Generalized Water Wave Models
http://hdl.handle.net/1956/6802
Ill Posedness Results for Generalized Water Wave Models
Teyekpiti, Vincent Tetteh
Master thesis
In the first part of the study, the weak asymptotic method is used to find singular solutions of the shallow water system in both one and two space dimensions. The singular solutions so constructed are allowed to contain Dirac delta distributions (Espinosa & Omel'yanov, 2005). The idea is to construct complex-valued approximate solutions which become real-valued in the distributional limit. The approach, which extends the range of possible singular solutions, is used to construct solutions which contain combinations of hyperbolic shock waves and Dirac delta distributions. It is shown in the second part that the Cauchy problem for Korteweg-de Vries (KdV) type equations is locally ill-posed in a negative Sobolev space. The method is used to construct a solution which does not depend continuously on its initial data in H^{s_ε}, where s_ε = −1/2 − ε.
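For context, the scale of Sobolev spaces in which the ill-posedness is stated is standard; on the real line the H^s norm (a textbook definition, not specific to this thesis) is given via the Fourier transform as

```latex
\|u\|_{H^{s}(\mathbb{R})}
  = \left( \int_{\mathbb{R}} \bigl(1+|\xi|^{2}\bigr)^{s}\,
           |\hat{u}(\xi)|^{2}\, \mathrm{d}\xi \right)^{1/2},
\qquad s = s_{\epsilon} = -\tfrac{1}{2} - \epsilon .
```

For negative s the weight (1 + |ξ|²)^s decays, so the norm tolerates very rough data; a failure of continuous dependence even in this weak topology is what makes the ill-posedness result strong.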
Mon, 03 Jun 2013 00:00:00 GMThttp://hdl.handle.net/1956/68022013-06-03T00:00:00ZThe Norwegian Stock Market: - A Local Gaussian Perspective
http://hdl.handle.net/1956/6787
The Norwegian Stock Market: - A Local Gaussian Perspective
Lura, Andreas
Master thesis
In this thesis, using daily returns from 18 stocks, the oil price, exchange rates, and the main index of the Oslo Stock Exchange over a period of 5 years, we investigate how the Local Gaussian Correlation can be used to describe the change in the relationship between stocks and the market, and how it can extend already established theory in finance.

Topics covered in this thesis are: risk estimation by conventional risk measures and by a method based on Local Gaussian Correlation, the Capital Asset Pricing Model (CAPM), copulas, and GARCH as a description of volatility and of the marginal distributions for copulas.

Value at Risk and Expected Shortfall are well-established risk measures in finance. They depend on a good description of the distribution in the tail, which can be challenging. These measures provide only a single number as a description of the risk; this may be appealing, but it does not really provide detailed information.

Within the theory of CAPM there have been attempts to describe the change in risk using the so-called conditional moments of the observations. This approach may be biased, as the conditional moments fail to reproduce the constant correlation and variances of the Gaussian distribution. Using instead the local parameters found when calculating the Local Gaussian Correlation as a local description of the beta on our data, there seems to be higher risk in the upper and lower tails than in the middle. However, what differs from the results of the previously mentioned approach is that the risk in the upper tail appears higher than in the lower. This might be explained by very large gains in the stock market being followed by a possible downturn or even a crash (bubble), while large negative market movements are less likely to be followed by a sudden positive boost.
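For reference, the two conventional risk measures named above have simple empirical (historical) estimators; a minimal sketch with illustrative numbers (not the model-based estimates used in the thesis):

```python
def var_es(returns, alpha=0.05):
    """Historical Value at Risk and Expected Shortfall at level alpha
    (losses counted positive); a simple empirical-quantile sketch."""
    losses = sorted(-r for r in returns)       # losses in ascending order
    k = round((1 - alpha) * len(losses))       # index where the tail starts
    k = min(k, len(losses) - 1)
    var = losses[k]                            # VaR: tail loss quantile
    tail = losses[k:]                          # losses at least as bad as VaR
    es = sum(tail) / len(tail)                 # ES: mean loss in the tail
    return var, es

returns = [0.01, -0.02, 0.005, -0.05, 0.02, -0.01, 0.015, -0.03, 0.0, 0.01]
var, es = var_es(returns, alpha=0.2)   # 20% tail of 10 days: worst two losses
```

Both outputs are single numbers, which is exactly the limitation the abstract points out: they summarize the tail but carry no information about how the dependence on the market changes across the distribution.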
Mon, 03 Jun 2013 00:00:00 GMThttp://hdl.handle.net/1956/67872013-06-03T00:00:00ZDCC-GARCH modeller med ulike avhengighetsstrukturer
http://hdl.handle.net/1956/6786
DCC-GARCH modeller med ulike avhengighetsstrukturer
Aardal, Helene
Master thesis
The main focus of this thesis is to find good methods for modelling volatility and dependence structure in financial portfolios. A specific multivariate GARCH model, the Dynamic Conditional Correlation (DCC-) GARCH, is combined with copulas and pair-copula constructions to obtain a more flexible model for exactly this purpose.
GARCH models are tools for predicting and analysing the volatility of time series when it varies over time, while copulas make it possible to model the dependence structure and the marginals separately.
Several DCC-GARCH models with different dependence structures are implemented in the thesis: a Copula-DCC-GARCH model with a multivariate Student t copula, a PCC-DCC-GARCH model with a Student t copula for all pairs of variables, and a PCC-DCC-GARCH model in which the pair-copula construction consists of both Clayton and Student t copulas.
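The univariate building block of every DCC-GARCH model is the GARCH(1,1) recursion for the conditional variance, sigma2[t] = omega + alpha*eps[t-1]^2 + beta*sigma2[t-1]; a minimal sketch with illustrative (not estimated) parameters:

```python
def garch_variances(eps, omega=0.05, alpha=0.1, beta=0.85):
    """GARCH(1,1): sigma2[t] = omega + alpha * eps[t-1]**2 + beta * sigma2[t-1],
    started from the unconditional variance omega / (1 - alpha - beta)."""
    sigma2 = [omega / (1.0 - alpha - beta)]
    for e in eps[:-1]:
        sigma2.append(omega + alpha * e * e + beta * sigma2[-1])
    return sigma2

eps = [0.1, -2.0, 0.3, 0.2, -0.1]   # return shocks
s2 = garch_variances(eps)
# the large shock at t = 1 raises the conditional variance at t = 2
```

In the DCC construction, each asset's returns are first standardized by such a conditional volatility; the dynamic correlations (or, as in this thesis, copulas) are then fitted to the standardized residuals.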
Thu, 14 Mar 2013 00:00:00 GMThttp://hdl.handle.net/1956/67862013-03-14T00:00:00ZSemiparametric model selection for copulas
http://hdl.handle.net/1956/6778
Semiparametric model selection for copulas
Jordanger, Lars Arne
Master thesis
This thesis will consider the performance of the cross-validation copula information criterion, xv-CIC, in the realm of finite samples.
The theory leading to the xv-CIC will be outlined, and an analysis will be conducted on an assorted collection of bivariate one-parameter copula models. The restriction to the bivariate case is not a grave one, since more complex d-variate samples can be broken down into a study of conditioned bivariate samples, by the methodology of regular vine-copulas, the pair copula construction and stepwise-semiparametric estimation of parameters.
As a by-product of our analysis, we offer advice on the choice of model selection method in the semiparametric realm.
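A bivariate one-parameter copula of the kind analysed can be sampled in a few lines; for the Gaussian copula one draws correlated normals and maps each margin through the standard normal CDF. A minimal sketch (the correlation value is an arbitrary illustration):

```python
import math
import random

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def gaussian_copula_sample(rho, n, seed=0):
    """Draw n pairs (u, v) in (0,1)^2 from a bivariate Gaussian copula with
    correlation parameter rho: correlate two standard normals, then map each
    margin through the normal CDF so that the marginals become uniform."""
    rng = random.Random(seed)
    pairs = []
    for _ in range(n):
        z1 = rng.gauss(0.0, 1.0)
        z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0)
        pairs.append((norm_cdf(z1), norm_cdf(z2)))
    return pairs

sample = gaussian_copula_sample(rho=0.8, n=2000)
# the dependence is carried entirely by the copula; each margin is uniform
```

This separation of marginals from dependence is what makes the semiparametric setting natural: marginals are estimated nonparametrically, and only the copula parameter is fitted parametrically.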
Mon, 03 Jun 2013 00:00:00 GMThttp://hdl.handle.net/1956/67782013-06-03T00:00:00ZGenerated Sound Waves in Corrugated Pipes
http://hdl.handle.net/1956/6693
Generated Sound Waves in Corrugated Pipes
Lillejord, Arve
Master thesis
This thesis studies sound waves generated in corrugated pipes using computational fluid dynamics.
The author did not have any prior experience in computational fluid dynamics, and a substantial
part of the work was therefore dedicated to its theory. Against this background, the thesis
aimed to develop the boundary conditions and solver settings necessary to simulate the problem
in OpenFOAM. The main result consists of two cases: the first studies the flow over a single
corrugation, and the second the flow through a corrugated pipe. The first case was able to
predict frequencies matching previous work. The second case was not able to reproduce the
features connected to singing in corrugated pipes; this is largely attributed to how the inlet
boundary condition was set.
Fri, 01 Jun 2012 00:00:00 GMThttp://hdl.handle.net/1956/66932012-06-01T00:00:00ZMisoppfatningar i algebra på ungdomsskulen - ei diagnostisk tilnærming
http://hdl.handle.net/1956/6692
Misoppfatningar i algebra på ungdomsskulen - ei diagnostisk tilnærming
Hompland, Anne Berit
Master thesis
This thesis concerns the work of 10th-grade pupils with algebra. More specifically, I have taken a qualitative approach to investigating the misconceptions they hold within the topic. I have further examined how, from this starting point, the misconceptions can be addressed in teaching by means of a diagnostic working method. With these aims for the work, I chose to carry out my own investigations to gather material.
Constructivist learning theories constitute the theoretical framework of the thesis. More specifically, it deals with concepts such as misconceptions, diagnostic tasks, and diagnostic teaching. Against this background, I have drawn on earlier research on misconceptions in algebra and used it in my own investigations.
Although the qualitative aspect was the motivation for starting the investigations, I chose to use method triangulation because it gave a more holistic approach to what I wanted to investigate. The qualitative parts were interviews with three pupils and diagnostic teaching in a group of 14 pupils. Before this, I had administered a quantitative diagnostic test to the entire 10th grade. Afterwards, I administered a new test to the group of 14 pupils.
The results, from both the diagnostic tests and the interviews, showed a group of pupils with major difficulties in algebra, reflecting what PISA and TIMSS surveys from earlier years have already pointed out. Previously documented misconceptions, from both Norwegian and international studies, appeared among these pupils. In the diagnostic teaching, I directed the focus straight at the misconceptions in order to work through them where possible. The follow-up test showed weak results in terms of improvement. However, as the accompanying discussion shows, this may have causes other than my particular approach to diagnostic work with algebra.
In conclusion, I point to three issues that I regard as the main challenges in working with misconceptions in algebra: the arithmetic foundation, the understanding of letter symbols, and the pupils' use of intuitive strategies. Although my sample is not large enough to say whether these are general tendencies, I believe I have grounds to say that these are challenges that mathematics teachers should be aware of.
Thu, 24 May 2012 00:00:00 GMThttp://hdl.handle.net/1956/66922012-05-24T00:00:00ZNivådeling i matematikk på ungdomsskolen - en kvalitativ undersøkelse
http://hdl.handle.net/1956/6691
Nivådeling i matematikk på ungdomsskolen - en kvalitativ undersøkelse
Tindeland, Anita
Master thesis
The Norwegian national curriculum (Kunnskapsløftet, K06) places great emphasis on instruction adapted to each individual pupil. Several lower secondary schools have introduced ability grouping in mathematics as a measure to meet this requirement. In Norway, the idea of the comprehensive school stands strong, and ability grouping is a highly contested topic; both researchers and politicians are divided on the issue. Using the qualitative research interview, I have looked more closely at how ability grouping is carried out at two schools in the Bergen area. My research question has been: What consequences does ability grouping of mathematics teaching at the lower secondary level have on the social, personal, and organisational levels? In the course of the work I have made numerous and complex findings, and I therefore do not offer a final verdict for or against ability grouping. In the text I discuss the positive and negative consequences ability grouping has for the everyday school life of pupils and teachers. Among other things, I have looked at pupils' and teachers' perceptions of the arrangement's effect on bullying, social status, motivation, mastery and defence strategies, and the organisation of groups and teaching. I find that there is room for improvement in the arrangements I have examined, especially in how the teaching is structured in the different groups and how the opportunities for adaptation and variation are exploited. Successful ability grouping, if such a thing exists, depends on a range of factors: the pupil body, the teachers, the available resources, and the teaching methods used, to name a few. In this study I have highlighted some of the consequences of such an arrangement at two specific schools. I find both positive and negative sides to the arrangements examined, and I conclude by presenting some suggestions that might help improve them further.
Wed, 30 May 2012 00:00:00 GMThttp://hdl.handle.net/1956/66912012-05-30T00:00:00ZAdaptive parameterization of electric conductivity in inversion of electromagnetic data
http://hdl.handle.net/1956/6687
Adaptive parameterization of electric conductivity in inversion of electromagnetic data
Ek, Torbjørn Helland
Master thesis
We describe a methodology developed for 3-D parameter identification, with a focus on large-scale applications such as monitoring subsea oil production and geothermal systems. The methodology is designed to handle challenges related to low parameter sensitivity, non-uniqueness of the inverse solutions, and costly numerical calculations. A reduced, composite parameter representation is chosen to meet these challenges. Our contributions to the methodology involve choosing a reduced representation with radial basis functions, to maintain a low number of parameters. We also propose the use of a first-order selection measure in the refinement process to reduce the computational cost. The performance of the proposed changes to the methodology is illustrated in a series of examples for estimating the change in electric conductivity from time-lapse electromagnetic observations. The results show some limitations regarding the accuracy of the first-order selection measure. For the investigated numerical examples, radial basis functions, together with the described methodology, effectively provide an estimate of the electric conductivity field from electromagnetic measurements.
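The reduced representation with radial basis functions amounts to expanding the sought field in a small number of smooth bumps, so that the inverse problem only has to identify a handful of weights. A minimal 1-D sketch (centers, weights, and width are hypothetical, purely for illustration):

```python
import math

def rbf_field(x, centers, weights, width):
    """Value at x of a field expanded in Gaussian radial basis functions:
    f(x) = sum_i w_i * exp(-(x - c_i)**2 / width**2)."""
    return sum(w * math.exp(-((x - c) ** 2) / (width * width))
               for c, w in zip(centers, weights))

# three basis functions parameterize the whole 1-D field (hypothetical values)
centers = [0.2, 0.5, 0.8]
weights = [1.0, 2.5, 0.7]
field = [rbf_field(i / 100.0, centers, weights, width=0.15) for i in range(101)]
# adaptive refinement would add a new center where the data misfit is largest
```

Here 101 field values are controlled by just three weight parameters, which is the point of the reduced representation: the inversion stays well-posed and cheap even when the forward simulation is expensive.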
Fri, 01 Jun 2012 00:00:00 GMThttp://hdl.handle.net/1956/66872012-06-01T00:00:00ZGonality of Points in Brill-Noether Loci of the Moduli Space of Curves
http://hdl.handle.net/1956/6686
Gonality of Points in Brill-Noether Loci of the Moduli Space of Curves
Sæbø, Gard
Master thesis
In this master's thesis we study the geometry of
points in the Brill-Noether locus. A typical
problem is to find the gonality of these points,
which can be viewed as projective curves. We
also study which curves on K3 surfaces correspond
to smooth points in the Hilbert scheme.
Thu, 31 May 2012 00:00:00 GMThttp://hdl.handle.net/1956/66862012-05-31T00:00:00ZImpact of compressibility in vertically integrated models for CO2 storage
http://hdl.handle.net/1956/6685
Impact of compressibility in vertically integrated models for CO2 storage
Reistad, Silje Rognsvåg
Master thesis
Recent work has shown that simplified models obtained by vertical integration give reasonable approximations for CO2 migration. Most previous work on this topic either assumes that CO2 is an incompressible fluid, or works with a quasi-compressibility that accounts for horizontal but not vertical compressibility. However, CO2 is highly compressible, and large variations in CO2 density can be expected due to pressure increases near the injection well. In addition, long-term CO2 migration to shallower depths can result in significantly lower density, as dictated by geothermal gradients and regional pressure conditions. In this study, we outline the mathematical and numerical approach to solving the coupled vertically integrated models with density variation. Mathematical models for CO2 storage are considered in which the equations for mass conservation and Darcy's law are vertically integrated and coupled with an equation of state for the CO2 density. Based on these, a pressure equation is derived. This pressure equation is then discretised with an implicit control-volume method and a partially implicit Lax-Friedrichs method. A solution approach based on IMPES is suggested.
We have unfortunately not been able to solve this problem, as there have been some difficulties with the boundary values in the code, but we suggest a plan for further work towards a working code.
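For a 1-D scalar conservation law u_t + f(u)_x = 0, the explicit Lax-Friedrichs scheme referred to reads u_i^{n+1} = (u_{i-1}^n + u_{i+1}^n)/2 - (dt/(2*dx))*(f(u_{i+1}^n) - f(u_{i-1}^n)). A minimal sketch for Burgers' flux with periodic boundaries (illustrative only, not the vertically integrated CO2 model):

```python
def lax_friedrichs_step(u, f, dt, dx):
    """One Lax-Friedrichs step for u_t + f(u)_x = 0 with periodic boundaries."""
    n = len(u)
    return [0.5 * (u[(i - 1) % n] + u[(i + 1) % n])
            - dt / (2.0 * dx) * (f(u[(i + 1) % n]) - f(u[(i - 1) % n]))
            for i in range(n)]

flux = lambda v: 0.5 * v * v            # Burgers' flux, f(u) = u^2 / 2
n, dx, dt = 100, 0.01, 0.005            # CFL: (dt / dx) * max|f'(u)| <= 1
u = [1.0 if 25 <= i < 50 else 0.0 for i in range(n)]   # square pulse
mass0 = sum(u) * dx
for _ in range(40):
    u = lax_friedrichs_step(u, flux, dt, dx)
# the scheme is conservative: total mass is unchanged up to rounding
```

The update conserves the discrete mass exactly (the flux differences telescope under periodic boundaries), which is the property that makes the scheme attractive for the mass-conservation equations in the thesis.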
Fri, 01 Jun 2012 00:00:00 GMThttp://hdl.handle.net/1956/66852012-06-01T00:00:00ZCritical Measures and Parameter Space of Jenkins-Strebel Quadratic Differentials
http://hdl.handle.net/1956/6684
Critical Measures and Parameter Space of Jenkins-Strebel Quadratic Differentials
Frolova, Anastasia
Master thesis
The thesis is devoted to applications of the
theory of quadratic differentials to the problem
of constructing critical measures, which provide
critical values of the logarithmic energy of a
charge system in the complex plane. We study the
properties of the supports of such measures. We
also consider the problem of describing the
parameter space of Jenkins-Strebel differentials,
which is related to the problem of describing
critical measures.
Mon, 07 May 2012 00:00:00 GMThttp://hdl.handle.net/1956/66842012-05-07T00:00:00ZVascular responses to radiotherapy and androgen-deprivation therapy in experimental prostate cancer
http://hdl.handle.net/1956/6664
Vascular responses to radiotherapy and androgen-deprivation therapy in experimental prostate cancer
Røe, Kathrine; Mikalsen, Lars T. G.; Kogel, Albert J. van der; Bussink, Johan; Lyng, Heidi; Olsen, Dag R.
Peer reviewed; Journal article
<p>Background: Radiotherapy (RT) and androgen-deprivation therapy (ADT) are standard treatments for advanced prostate cancer (PC). Tumor vascularization is recognized as an important physiological feature likely to impact on both RT and ADT response, and this study therefore aimed to characterize the vascular responses to RT and ADT in experimental PC.</p> <p>Methods: Using mice implanted with CWR22 PC xenografts, vascular responses to RT and ADT by castration were visualized in vivo by DCE MRI, before contrast-enhancement curves were analyzed both semi-quantitatively and by pharmacokinetic modeling. Extracted image parameters were correlated with the results from ex vivo quantitative fluorescent immunohistochemical analysis (qIHC) of tumor vascularization (9F1), perfusion (Hoechst 33342), and hypoxia (pimonidazole), performed on tissue sections made from tumors excised directly after DCE MRI.</p> <p>Results: Compared to untreated (Ctrl) tumors, an improved and highly functional vascularization was detected in androgen-deprived (AD) tumors, reflected by increases in DCE MRI parameters and by an increased number of vessels (VN), vessel density (VD), and vessel area fraction (VF) from qIHC. Although total hypoxic fractions (HF) did not change, estimated acute hypoxia scores (AHS), the proportion of hypoxia staining within 50 μm of perfusion staining, were increased in AD tumors compared to Ctrl tumors. Five to six months after ADT, renewed castration-resistant (CR) tumor growth appeared with an even further enhanced tumor vascularization. Compared to the large vascular changes induced by ADT, RT induced minor vascular changes. Correlating DCE MRI and qIHC parameters revealed that the semi-quantitative parameter area under curve (AUC) from initial time-points correlated strongly with VD and VF, whereas estimation of vessel size (VS) by DCE MRI required pharmacokinetic modeling.
HF was not correlated with any DCE MRI parameter; however, AHS may be estimated after pharmacokinetic modeling. Interestingly, such modeling also detected tumor necrosis very strongly.</p> <p>Conclusions: DCE MRI reliably allows non-invasive assessment of tumors’ vascular function. The findings of increased tumor vascularization after ADT encourage further studies into whether these changes are beneficial for combined RT, or whether treatment with anti-angiogenic therapy may be a strategy to improve the therapeutic efficacy of ADT in advanced PC.</p>
Wed, 23 May 2012 00:00:00 GMThttp://hdl.handle.net/1956/66642012-05-23T00:00:00ZMultiscale simulation of flow and heat transport in fractured geothermal reservoirs
http://hdl.handle.net/1956/6593
Multiscale simulation of flow and heat transport in fractured geothermal reservoirs
Sandve, Tor Harald
Doctoral thesis
Mon, 04 Mar 2013 00:00:00 GMThttp://hdl.handle.net/1956/65932013-03-04T00:00:00ZLocal Likelihood
http://hdl.handle.net/1956/6583
Local Likelihood
Otneim, Håkon
Master thesis
Methods for probability density estimation are
traditionally classified as either parametric or
non-parametric. Fitting a parametric model to
observations is generally a good idea when we have
sufficient information on the origin of our data;
if not, we must turn to non-parametric methods,
usually at the cost of poorer performance.
This thesis discusses local maximum likelihood
estimation of probability density functions, which
can be regarded as a compromise between the two
mindsets. The idea is to fit a parametric model
locally, that is, to let the parameters and their
estimates depend on the location. If the chosen
model is close to the true, unknown density, we
keep much of the appealing properties of a full
parametric approach. On the other hand, local
likelihood density estimates have performance
comparable to well known non-parametric methods,
even though the locally fitted parametric model
differs from the true density in a global sense.
Although traditional methods withstand the test of
time as excellent options in many situations, the
local maximum likelihood estimator opens up a
range of applications. Hjort and Jones [1996], who
will serve as the main reference for this thesis,
call it semi-parametric density estimation, as it
is particularly useful when we have partial
knowledge on the shape of the unknown density, but
not enough to trust the ordinary, global maximum
likelihood estimates. Further, many have built on
the idea of locally parametric estimation to
applications beyond just density estimation, some
of whom have been mentioned and included as
references throughout the thesis.
One-dimensional density estimation will, however,
be the primary focus here, with particular
emphasis on large sample theory. The main results
concern asymptotic bias, which is shown to have a
larger order than the bias of traditional kernel
estimation as the sample size increases to
infinity, and the bandwidth decreases towards
zero. Nonetheless, in practical situations with
reasonable sample sizes, the local likelihood
estimator is shown to perform very well, with an
appealing robustness against under- and
oversmoothing. Indeed, no experiment performed
shows signs of deterioration of local likelihood
estimates compared to kernel estimation as the
sample size grows.
Mon, 23 Apr 2012 00:00:00 GMThttp://hdl.handle.net/1956/65832012-04-23T00:00:00ZGAMLSS-modeller i bilforsikring
http://hdl.handle.net/1956/6582
GAMLSS-modeller i bilforsikring
Røyrane-Løtvedt, Hallvard
Master thesis
In this thesis I test various models for predicting the total claim payment from an insurance company to a policyholder during a policy year. The models tested belong to the framework of Generalized Additive Models for Location, Scale and Shape (GAMLSS), introduced by Rigby and Stasinopoulos (2001). The data used in the thesis come from a Norwegian insurance company and consist of information on motor insurance policies and claims for the years 2000-2005. Using only three explanatory variables, calendar year, vehicle age and policyholder age, I show that the choice of statistical model is decisive for the predictions of the claim payment (Chapter 9). Furthermore, I examine how the model predictions can be used to construct a realistic pricing model, and how the pricing model yields different results for the different prediction models (Chapter 10).
The total claim payment is naturally decomposed into claim frequency and claim severity. I test both models that model these separately and models that model the total claim payment directly. I argue that the direct models are preferable. The recommended model is a Zero-Adjusted Inverse Gaussian (ZAIG) model, in which the functional form of the explanatory variables is chosen so as to minimize AIC. A ZAIG-distributed random variable takes the value 0 with probability psi and follows an Inverse Gaussian distribution with probability (1-psi). Claim severities are so skewed that an extremely skewed probability distribution, such as the Inverse Gaussian, is needed to describe them. I also argue that the choice of probability distribution strongly affects the quality of the predictions.
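A ZAIG draw as described (zero with probability psi, otherwise Inverse Gaussian) can be simulated directly. The parameter values below are made up for illustration and are not fitted to the thesis data.

```python
import numpy as np

def sample_zaig(n, psi, mu, lam, seed=None):
    """Draw n samples from a Zero-Adjusted Inverse Gaussian (ZAIG)
    distribution: 0 with probability psi, otherwise Inverse Gaussian
    (Wald) with mean mu and shape lam. Illustrative sketch; the
    parameter names are ours, not the thesis notation."""
    rng = np.random.default_rng(seed)
    zero = rng.random(n) < psi            # claim-free policy years
    draws = rng.wald(mu, lam, size=n)     # positive claim amounts
    return np.where(zero, 0.0, draws)

# 90% of policy years claim-free; mean of ZAIG is (1 - psi) * mu = 2000.
claims = sample_zaig(100_000, psi=0.9, mu=20_000.0, lam=5_000.0, seed=0)
```

The point mass at zero captures claim-free policy years, while the small shape parameter relative to the mean gives the heavy right skew mentioned above.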
Fri, 01 Jun 2012 00:00:00 GMThttp://hdl.handle.net/1956/65822012-06-01T00:00:00ZStatistiske modeller basert på skjulte Markovkjeder i kontinuerlig tid
http://hdl.handle.net/1956/6581
Statistiske modeller basert på skjulte Markovkjeder i kontinuerlig tid
Harjo, Lisbet Lien
Master thesis
In this thesis I have considered situations where observations over time can be thought of as generated by a Markov chain,
and studied the effect of various covariates on the transition intensities, using both binary and time-dependent covariates.
I have also looked at situations where the states of the Markov chain may be observed with error, so that one does not observe
the true underlying states, but only misclassified states determined by a probability distribution.
Finally, I have tested a relevant package for the software R that has been developed for computations on this type of problem.
Fri, 01 Jun 2012 00:00:00 GMThttp://hdl.handle.net/1956/65812012-06-01T00:00:00ZCopulas and Local Gaussian Correlation
http://hdl.handle.net/1956/6580
Copulas and Local Gaussian Correlation
Nordbø, Tommy Neverdahl
Master thesis
We consider copula models and dependence measures. In particular, the recently developed local dependence measure called local Gaussian correlation (LGC) is presented, and its connection with copula theory is explored. Theoretical LGC values of some well-known (and some less well-known) copula models are derived and used to make plots that show the dependence structure of the copula. Finally, we outline some ways of using this dependence measure for copula selection and goodness-of-fit tests.
Wed, 30 May 2012 00:00:00 GMThttp://hdl.handle.net/1956/65802012-05-30T00:00:00ZPasser og linjal, origami og Galoisteori
http://hdl.handle.net/1956/6579
Passer og linjal, origami og Galoisteori
Vasdal, Thomas Arneberg
Master thesis
This thesis seeks to illuminate constructions with compass and straightedge, and with origami, by means of Galois theory. The three classical problems, doubling the cube, squaring the circle and trisecting the angle, are central. The thesis is aimed particularly at teachers with a lektor qualification in mathematics. It is an advantage to have taken an introductory course in abstract algebra.
Wed, 01 Jun 2011 00:00:00 GMThttp://hdl.handle.net/1956/65792011-06-01T00:00:00ZTestmetoder for identifisering av publikasjonsbias i metaanalyser
http://hdl.handle.net/1956/6578
Testmetoder for identifisering av publikasjonsbias i metaanalyser
Gjerdevik, Miriam
Master thesis
Test methods for identifying publication bias in meta-analyses.
Wed, 24 Oct 2012 00:00:00 GMThttp://hdl.handle.net/1956/65782012-10-24T00:00:00ZStatistical Approach to Relatedness Analysis in Large Collections of Genetic Profiles. An Application to a DNA-Registry of Fin Whales
http://hdl.handle.net/1956/6548
Statistical Approach to Relatedness Analysis in Large Collections of Genetic Profiles. An Application to a DNA-Registry of Fin Whales
Benonisdottir, Stefania
Master thesis
The present study utilized data from an Icelandic DNA registry of fin whales to search for pairs of relatives with a statistical approach. Three types of relatedness were of interest: half-siblings, parent-offspring and first cousins. Detection of relatives was done by computing pairwise LOD scores for the individuals in the sample for each relatedness of interest. The corresponding p-values for each LOD score were estimated by comparing the original LOD scores with LOD scores of simulated unrelated individuals. Due to the very high number of pairwise comparisons, adjustment for multiple testing was necessary. Two well-known multiple-testing adjustment methods were applied and compared: the Bonferroni correction and the FDR procedure. The FDR procedure was found to be more suitable for this analysis, since the Bonferroni procedure was too conservative for such a high number of LOD scores. Eight pairs of relatives were detected within the sample. When information about age and age of maturity had been taken into account, three of those pairs were classified as parent-offspring pairs and five were classified as half-siblings. One of the detected parent-offspring pairs was a male fin whale and a foetus, also detected as a father-offspring pair in a previous study.
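The two multiple-testing adjustments compared above can be sketched as follows. The p-values are a toy example, not the fin-whale LOD scores; the point is that on a long list of tests the Benjamini-Hochberg FDR procedure typically rejects more hypotheses than the more conservative Bonferroni correction.

```python
import numpy as np

def bonferroni(pvals, alpha=0.05):
    """Reject H0 where p < alpha / m (controls the family-wise error rate)."""
    return pvals < alpha / len(pvals)

def benjamini_hochberg(pvals, alpha=0.05):
    """Benjamini-Hochberg step-up FDR procedure: reject the k smallest
    p-values, where k is the largest i with p_(i) <= (i/m) * alpha."""
    m = len(pvals)
    order = np.argsort(pvals)
    ranked = pvals[order]
    below = ranked <= (np.arange(1, m + 1) / m) * alpha
    k = int(np.max(np.nonzero(below)[0])) + 1 if below.any() else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True
    return reject

# Toy example: four smallish p-values among 96 clear nulls.
p = np.array([1e-6, 5e-4, 1e-3, 1.5e-3] + [0.5] * 96)
```

Here Bonferroni's per-test threshold is 0.05/100 = 5e-4, so it rejects only the smallest p-value, while BH rejects all four of the small ones.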
Tue, 20 Nov 2012 00:00:00 GMThttp://hdl.handle.net/1956/65482012-11-20T00:00:00ZStill Water Performance Simulation of a SWATH Wind Turbine Service Vessel
http://hdl.handle.net/1956/6546
Still Water Performance Simulation of a SWATH Wind Turbine Service Vessel
Angeltveit, Rune
Master thesis
In this thesis I carry out a computational fluid dynamics (CFD) simulation of a
SWATH (Small Waterplane Area Twin Hull) wind turbine service vessel moving
in still water at different speeds, using the CFD tool STAR-CCM+. Since I did
not have any prior experience in CFD, a substantial part of the thesis is dedicated
to CFD theory. First, the theory of fluid dynamics and CFD methods is
described. Based on this theory, models and solvers for the simulation in
STAR-CCM+ are chosen, together with the required boundary conditions and initial values. A
major part of the simulation work is to obtain a good mesh before the solution is
achieved.
The thesis considers the total hull resistance of the vessel at different speeds due
to pressure and shear forces. The results are compared with a still-water performance
test for a scaled model. Wave-making resistance is also considered in the
comparison. The resistance on the four holes in the hull, where the ballast tanks
are placed, is compared with the resistance on the hull. Secondly, the thesis examines
how the water level inside the ballast tanks, which are open to the sea at the
front and at the back, is affected at different speeds.
Tue, 20 Nov 2012 00:00:00 GMThttp://hdl.handle.net/1956/65462012-11-20T00:00:00ZModified action and differential operators on the 3-D sub-Riemannian sphere
http://hdl.handle.net/1956/6534
Modified action and differential operators on the 3-D sub-Riemannian sphere
Chang, Der-Chen; Markina, Irina; Vasiliev, Alexander
Peer reviewed; Journal article
Our main aim is to present a geometrically meaningful formula for the fundamental
solutions to a second order sub-elliptic differential equation and to the heat equation associated with
a sub-elliptic operator in the sub-Riemannian geometry on the unit sphere S3. Our method is based
on the Hamilton-Jacobi approach, where the corresponding Hamiltonian system is solved with
mixed boundary conditions. A closed form of the modified action is given. It is a sub-Riemannian
invariant and plays the role of a distance on S3.
Wed, 01 Dec 2010 00:00:00 GMThttp://hdl.handle.net/1956/65342010-12-01T00:00:00ZOptimal dividend policies for a class of growth-restricted diffusion processes under transaction costs and solvency constraints
http://hdl.handle.net/1956/6226
Optimal dividend policies for a class of growth-restricted diffusion processes under transaction costs and solvency constraints
Bai, Lihua; Hunting, Martin; Paulsen, Jostein
Peer reviewed; Journal article
In this paper, we consider a company whose surplus follows a rather general diffusion
process and whose objective is to maximize expected discounted dividend payments. With
each dividend payment there are transaction costs and taxes, and it is shown in [7] that
under some reasonable assumptions, optimality is achieved by using a lump sum dividend
barrier strategy, i.e. there is an upper barrier ū and a lower barrier u̲ so that whenever the
surplus reaches ū, it is reduced to u̲ through a dividend payment. However, these optimal
barriers may be unacceptably low from a solvency point of view. It is argued that in that
case one should still look for a barrier strategy, but with barriers that satisfy a
given constraint. We propose a solvency constraint similar to that in [6]: whenever dividends
are paid out, the probability of ruin within a fixed time T, with the same strategy in
the future, should not exceed a predetermined level ε. It is shown how optimality can be
achieved under this constraint, and numerical examples are given.
Sun, 01 Jan 2012 00:00:00 GMThttp://hdl.handle.net/1956/62262012-01-01T00:00:00ZExistence of a classical solution of a parabolic PIDE associated with ruin probability
http://hdl.handle.net/1956/6225
Existence of a classical solution of a parabolic PIDE associated with ruin probability
Hunting, Martin
Preprint
In this article we prove existence of a classical solution of the
integro-differential equation for the ruin probability in finite time stated in
Paulsen (2008).
Mon, 18 Jun 2012 00:00:00 GMThttp://hdl.handle.net/1956/62252012-06-18T00:00:00ZA numerical approach to ruin probability in finite time for fitted models with investment
http://hdl.handle.net/1956/6224
A numerical approach to ruin probability in finite time for fitted models with investment
Hunting, Martin
Preprint
In this paper we present a numerical method for solving a partial
integro-differential equation (PIDE) associated with ruin probability, when
the surplus is continuously invested in stochastic assets. The method uses
precalculated Gaussian quadrature rules for the numerical integration.
Except for the numerical integration part, the method is based largely
on the finite differences method used in Halluin et al. (2005) for a PIDE
associated with a more general option pricing problem. In our numerical
examples we use historical data for inflation and returns on U.S. Treasury
bills, U.S. Treasury bonds and American stocks. The log-returns of the
investments are adjusted for an assumed constant force of inflation. We
consider four different strategies for continuous investment: (a) U.S. Treasury
bills with a constant maturity of 3 months, (b) U.S. Treasury bonds
with a constant maturity of 10 years, (c) the Standard and Poor's 500
index, and (d) another index of American stocks. For each of these strategies
a geometric Brownian motion process is fitted to the aforementioned
historical data. The results suggest that the ruin probabilities obtained
can vary substantially, depending on whether the models are fitted to data
for the last decade or for a longer time period. We also discuss numerical
solution of investment models with jumps.
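Precalculated Gaussian quadrature rules of the kind used for the integral term are available in standard libraries. A minimal sketch with NumPy's Gauss-Legendre nodes (the paper's actual kernel, nodes and weights differ):

```python
import numpy as np

# Precalculate a 20-point Gauss-Legendre rule on [-1, 1] once, then
# reuse it for every integral: this is the "precalculated" idea, since
# the nodes and weights never change between evaluations.
nodes, weights = np.polynomial.legendre.leggauss(20)

def integrate(f, a, b):
    """Approximate the integral of f over [a, b] with the fixed rule,
    using the affine map from [-1, 1] to [a, b]."""
    x = 0.5 * (b - a) * nodes + 0.5 * (b + a)
    return 0.5 * (b - a) * float(np.sum(weights * f(x)))
```

A 20-point rule integrates polynomials up to degree 39 exactly and resolves smooth integrands such as the exponential to machine precision.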
Sun, 01 Jan 2012 00:00:00 GMThttp://hdl.handle.net/1956/62242012-01-01T00:00:00ZOptimal dividend policies with transaction costs for a class of jump-diffusion processes
http://hdl.handle.net/1956/6214
Optimal dividend policies with transaction costs for a class of jump-diffusion processes
Hunting, Martin; Paulsen, Jostein
Peer reviewed; Journal article
This paper addresses the problem of finding an optimal dividend policy for a class of jump-diffusion processes. The jump component is a compound Poisson process with negative jumps, and the drift and diffusion components are assumed to satisfy some regularity and growth restrictions. Each dividend payment incurs a fixed and a proportional cost, meaning that if ξ is paid out by the company, the shareholders receive kξ−K, where k and K are positive. The aim is to maximize expected discounted dividends until ruin. It is proved that when the jumps belong to a certain class of light-tailed distributions, the optimal policy is a simple lump sum policy, that is, when assets are equal to or larger than an upper barrier ū∗, they are immediately reduced to a lower barrier u̲∗ through a dividend payment. The case with K=0 is also investigated briefly, and the optimal policy is shown to be a reflecting barrier policy for the same light-tailed class. Methods to numerically verify whether a simple lump sum barrier strategy is optimal for any jump distribution are provided at the end of the paper, and some numerical examples are given.
Sun, 01 Jan 2012 00:00:00 GMThttp://hdl.handle.net/1956/62142012-01-01T00:00:00ZRuin probability
and optimal dividend policy for models
with investment
http://hdl.handle.net/1956/6213
Ruin probability
and optimal dividend policy for models
with investment
Hunting, Martin
Doctoral thesis
<p>In most countries the authorities impose capital requirements on insurance companies in order to avoid the adverse consequences to society when insurance companies default on claims. Since holding capital is costly, this naturally leads to the problem of deciding how large the risk reserve needs to be, or what is a "safe" level of liquidity. A common answer is that the probability that the insurance company will default on policyholder claims should not be higher than a certain small level ε. An implementation of this policy requires reasonably
accurate methods for determining this probability, known as the ruin probability.</p>
<p>Rigorous mathematical treatments of the ruin probability problem can be traced at least as far back as the acclaimed doctoral thesis of Filip Lundberg from 1903 with the title "Approximerad framställning af sannolikhetsfunktionen". Traditionally the focus has been on ruin probability on an infinite time horizon. In these models an insurance company can avoid ruin by allowing its risk reserve to grow toward infinity. At the 15th International Congress of Actuaries in 1957 Bruno de Finetti criticized this approach. In particular he couldn't see why
an older company should hold more capital than a younger one bearing similar risks, only because it is older. As an alternative de Finetti formulated what is known as the "de Finetti’s dividend problem": Maximizing the expected sum of the discounted paid out dividends from time zero until ruin. Since then several papers have presented solutions to this problem for various risk processes. Two of the papers in this thesis, which we denote Paper A and Paper B, focus on de Finetti’s dividend problem, with the risk process following a general diffusion and a jump-diffusion process, respectively. These models are particularly relevant for insurance companies where the premium income is invested in assets with stochastic returns. In keeping with de Finetti’s original paper, where ruin probability played a central role, Paper A also discusses solutions of de Finetti’s dividend problem under solvency constraints.</p>
<p>In the last few decades a growing number of papers have focused on ruin probability on a finite time horizon. For short time spans the assumption that the risk reserve is allowed to grow freely is less spurious. An important tool for calculating the ruin probability on a finite horizon is solving certain partial integro-differential equations (PIDEs). The third paper, denoted Paper C, discusses how these PIDEs can be solved numerically. The last paper, denoted Paper D, discusses regularity properties for some of these PIDEs.</p>
In the electronic version of the thesis the published version of paper I has been replaced with the accepted version.
Tue, 06 Nov 2012 00:00:00 GMThttp://hdl.handle.net/1956/62132012-11-06T00:00:00ZGasslekkasjar i Marine miljø
http://hdl.handle.net/1956/6052
Gasslekkasjar i Marine miljø
Aase, Stig Andre
Master thesis
This Master's thesis studies fluid flow through marine sediments, and the aim is to assess whether a simple mathematical model can describe and preserve characteristic features of the process. The discussion is based on comparison with the few real data available to us. In this context, the Nyegga region in the Norwegian Sea is taken as the point of departure; the high density of craters ("pockmarks") on the seabed there testifies to considerable vertical fluid movement. The thesis also gives an introduction to the genetic algorithm, Monte Carlo methods, and parallelization of Matlab codes, all of which are key tools used to address the problem at hand.
Tue, 19 Jun 2012 00:00:00 GMThttp://hdl.handle.net/1956/60522012-06-19T00:00:00ZNonholonomic geometry on finite and infinite dimensional Lie groups and rolling manifolds
http://hdl.handle.net/1956/5784
Nonholonomic geometry on finite and infinite dimensional Lie groups and rolling manifolds
Grong, Erlend
Doctoral thesis
Part I: We start by giving some background on the topics discussed in this thesis.
The main topic of the thesis is nonholonomic geometry. In Chapter 1 we give an
introduction of nonholonomic geometry in the context of geometric control theory. In
a brief exposition, we try to give an overview of the areas of sub-Riemannian and
sub-Lorentzian geometry, stating several of the most important results in this area. A
historical account concludes this chapter.
Chapters 2 and 3 consist of mathematical prerequisites for the results presented later.
However, these chapters mainly focus on certain selected facts, rather than trying to give
an overview of a whole topic. Chapter 2 contains some results from differential geometry
related to submersions and geodesic curvatures. Chapter 3 gives introductory remarks
on the convenient calculus of infinite dimensional manifolds.
Chapter 4, the last chapter in part I, gives a short presentation and summary of the
main results of the papers included in Part II. We first present the results of Paper
B, regarding sub-Riemannian and sub-Lorentzian geometry on the universal cover of
SU(1, 1). The results in Papers C, D and F are then considered, which concern the
nonholonomic dynamical system of two manifolds rolling on each other without twisting
or slipping. Finally, we present some results in infinite dimensional manifolds in Paper
A and Paper F. In particular, Paper F contains a generalization of sub-Riemannian
geometry to the infinite dimensional setting.
Part I ends with the bibliography of the first four chapters.
Part II: Here, six papers are included, Papers A to F. The papers are listed in chronological
order according to their date of completion. Two of them are published, one is accepted
for publication and three are submitted.
Fri, 30 Mar 2012 00:00:00 GMThttp://hdl.handle.net/1956/57842012-03-30T00:00:00ZBoundary-value problems and shoaling analysis for the BBM equation
http://hdl.handle.net/1956/5772
Boundary-value problems and shoaling analysis for the BBM equation
Senthilkumar, Amutha
Master thesis
In this thesis we study the BBM equation
$$u_t + u_x + \frac{3}{2}uu_x - \frac{1}{6}u_{xxt} = 0,$$
which describes approximately the two-dimensional propagation of surface waves in a uniform horizontal channel containing an incompressible and inviscid fluid which in its undisturbed state has depth $h$. Here $u(x,t)$ represents the deviation of the water surface from its undisturbed position, and the flow is assumed to be irrotational.
The BBM equation features a bounded dispersion relation (Benjamin, Bona and Mahony). We utilize this boundedness to prove existence, uniqueness and regularity results for solutions of the BBM equation supplemented with an initial condition and various types of boundary conditions. We also treat the water-wave problem over an uneven bottom. In particular, we consider two
different models for the propagation of long waves in channels of decreasing depth, and we provide both analytical and numerical results for these models. For the numerical simulation we use a spectral discretization coupled with a four-stage Runge-Kutta time integration scheme.
After verifying numerically that the algorithm is fourth-order accurate in time,
we propagate solitary waves over the uneven bottom and examine how they respond to the non-uniform depth. Our numerical simulations are compared with previous numerical and experimental results of Madsen and Mei and of Peregrine.
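A minimal sketch of such a scheme, assuming a periodic domain and a flat bottom (the thesis's boundary-value and shoaling cases need additional terms): writing $(3/2)uu_x = ((3/4)u^2)_x$ and solving for $u_t$ gives, in Fourier space, $\hat{u}_t = -ik\,\mathcal{F}[u + (3/4)u^2]/(1 + k^2/6)$, which is then advanced with the classical four-stage Runge-Kutta method. The grid sizes and initial hump are our choices.

```python
import numpy as np

# Spectral (Fourier) discretization of the BBM equation on a periodic
# domain, advanced in time with classical four-stage Runge-Kutta.
N, L = 256, 50.0
x = np.linspace(0.0, L, N, endpoint=False)
k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)   # wavenumbers

def rhs(u):
    # (1 + k^2/6) u_t^hat = -i k FFT[u + (3/4) u^2]
    nonlinear = u + 0.75 * u**2
    ut_hat = -1j * k * np.fft.fft(nonlinear) / (1.0 + k**2 / 6.0)
    return np.real(np.fft.ifft(ut_hat))

def rk4_step(u, dt):
    k1 = rhs(u)
    k2 = rhs(u + 0.5 * dt * k1)
    k3 = rhs(u + 0.5 * dt * k2)
    k4 = rhs(u + dt * k3)
    return u + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

u = 0.2 / np.cosh(0.3 * (x - L / 2.0))**2   # smooth initial hump
mass0 = u.mean()                            # BBM conserves the mean of u
dt = 0.05
for _ in range(200):
    u = rk4_step(u, dt)
```

Because the BBM dispersion relation is bounded, the linear eigenvalues stay small and the explicit RK4 step faces no severe stability restriction, unlike for KdV-type equations.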
Mon, 20 Feb 2012 00:00:00 GMThttp://hdl.handle.net/1956/57722012-02-20T00:00:00ZA study in the effects of different treatments on the growth of high fluorescence virus.
http://hdl.handle.net/1956/5771
A study in the effects of different treatments on the growth of high fluorescence virus.
Maharjan, Bikram
Master thesis
Thu, 01 Sep 2011 00:00:00 GMThttp://hdl.handle.net/1956/57712011-09-01T00:00:00ZThe Flow of Quasiconformal
Mappings on S³ with Contact
structure and a Family of Surfaces on
the Heisenberg Group
http://hdl.handle.net/1956/5770
The Flow of Quasiconformal
Mappings on S³ with Contact
structure and a Family of Surfaces on
the Heisenberg Group
Lavrichenko, Ksenia
Master thesis
Tue, 31 May 2011 00:00:00 GMThttp://hdl.handle.net/1956/57702011-05-31T00:00:00ZGode rank 1 latticereglar av høg dimensjon
http://hdl.handle.net/1956/5769
Gode rank 1 latticereglar av høg dimensjon
Topphol, Vegard
Master thesis
The thesis deals with numerical integration rules of
trigonometric degree, called lattice rules, with
particular focus on developing and testing algorithms
for finding good rank-1 lattice rules. The
necessary theory is also reviewed so that the text
can stand on its own.
Wed, 01 Jun 2011 00:00:00 GMThttp://hdl.handle.net/1956/57692011-06-01T00:00:00ZReactive Transport in Porous Media
http://hdl.handle.net/1956/5768
Reactive Transport in Porous Media
Sævik, Pål Næverlid
Master thesis
In this thesis, we show how the common equations for flow in porous media can be expanded to account for geochemical reactions. Furthermore, the complications arising when solving the new equations numerically are described and explained. Specialised methods that alleviate the difficulties are then introduced and discussed with respect to robustness and convergence properties. Much attention is directed to ways of reformulating the equations in order to make them more amenable to numerical treatment. The fact that chemical reactions introduce stiffness to the system is also addressed. Diagonally implicit Runge-Kutta methods, which are commonly used to combat stiffness, are evaluated with respect to their usefulness in CO2 sequestration simulations. Finally, we have applied the methods to several test cases, some including complex mineralogies, to illustrate the strengths and weaknesses of the different numerical approaches.
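A diagonally implicit Runge-Kutta step of the kind mentioned can be illustrated on a scalar stiff test equation. This two-stage, second-order SDIRK scheme with γ = 1 − 1/√2 is a standard textbook choice, not necessarily the one used in the thesis; because the Butcher matrix is lower triangular with equal diagonal entries, each stage needs only one (here scalar) linear solve.

```python
import numpy as np

# L-stable two-stage SDIRK scheme applied to the stiff test y' = lam * y.
lam = -50.0
gamma = 1.0 - 1.0 / np.sqrt(2.0)

def sdirk2_step(y, h):
    # Stage 1: k1 = f(y + h*gamma*k1)  =>  (1 - h*gamma*lam) k1 = lam * y
    k1 = lam * y / (1.0 - h * gamma * lam)
    # Stage 2: k2 = f(y + h*(1-gamma)*k1 + h*gamma*k2), again one solve
    k2 = lam * (y + h * (1.0 - gamma) * k1) / (1.0 - h * gamma * lam)
    return y + h * ((1.0 - gamma) * k1 + gamma * k2)

y, h = 1.0, 0.1  # step far above the explicit-Euler limit 2/|lam| = 0.04
for _ in range(20):
    y = sdirk2_step(y, h)
```

The exact solution at t = 2 is e^(-100), essentially zero; an explicit method with this step size would blow up, while the L-stable SDIRK step damps the transient as it should.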
Mon, 01 Aug 2011 00:00:00 GMThttp://hdl.handle.net/1956/57682011-08-01T00:00:00ZModulus method and its application to the theory of univalent functions
http://hdl.handle.net/1956/5767
Modulus method and its application to the theory of univalent functions
Belyaeva, Elena Vasilievna
Master thesis
This work is about the modulus method in univalent function theory. The method is based on the notion of the modulus of families of curves on a Riemann surface, and it has turned out to be very useful in solving various extremal problems in conformal and quasiconformal mappings.
Mon, 23 May 2011 00:00:00 GMThttp://hdl.handle.net/1956/57672011-05-23T00:00:00ZInjective braids, braided operads and double loop spaces
http://hdl.handle.net/1956/5766
Injective braids, braided operads and double loop spaces
Solberg, Mirjam
Master thesis
We construct the category of B-spaces, which is a braided monoidal diagram category. This category is Quillen equivalent to the category of simplicial sets. The induced equivalence of homotopy categories maps a commutative B-space monoid to a space that, if connected, is weakly equivalent to a double loop space. If X is a connected space, we find a commutative B-space monoid whose homotopy colimit is weakly equivalent to the double loop space of the double suspension of X. Similarly, we find a commutative B-space monoid that represents the nerve of a braided strict monoidal category.
Wed, 01 Jun 2011 00:00:00 GMThttp://hdl.handle.net/1956/57662011-06-01T00:00:00ZComparison of distance measures in Multimodal registration of medical images
http://hdl.handle.net/1956/5765
Comparison of distance measures in Multimodal registration of medical images
Nyfløtt, Jon Kirkebø
Master thesis
In this thesis I have worked with medical image registration, in particular registration of multimodal images. I have tested the image registration software package FAIR, developed by Jan Modersitzki, with particular focus on the choice of distance measure in image registration. I have tested and compared the distance measures included in FAIR, in particular the Normalized Gradient Field measure developed by Modersitzki.
In addition, I have developed a new distance measure, called Normalized Hessian Fields, and implemented it with the FAIR software. This has been compared to the other distance measures. I have also tested a combination of Normalized Gradient and Hessian Fields.
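The Normalized Gradient Field idea, comparing the alignment of image gradient directions rather than raw intensities, can be sketched as follows. This is a simplified version of the measure's general form; the discretization used in FAIR differs in detail, and the edge parameter eta and the toy images are our choices.

```python
import numpy as np

def ngf_distance(R, T, eta=1e-2):
    """Normalized-Gradient-Field-style distance between two 2-D images:
    sums 1 - cos^2 of the angle between the (regularized) gradients of
    R and T, so it is small where edges are aligned regardless of the
    images' intensity scales."""
    gRx, gRy = np.gradient(R)
    gTx, gTy = np.gradient(T)
    nR = np.sqrt(gRx**2 + gRy**2 + eta**2)   # regularized gradient norms
    nT = np.sqrt(gTx**2 + gTy**2 + eta**2)
    cos2 = ((gRx * gTx + gRy * gTy) / (nR * nT))**2
    return float(np.sum(1.0 - cos2))

# Toy images: a smooth pattern and a shifted (misaligned) copy of it.
xx, yy = np.meshgrid(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
R = np.sin(6 * xx) + np.cos(5 * yy)
T_shifted = np.roll(R, 8, axis=0)
```

Because only gradient directions enter, rescaling the intensities of one image leaves the distance unchanged, which is exactly what multimodal registration needs.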
The registration has been performed on multimodal brain data and kidney data provided by Haukeland University Hospital, Bergen.
Tue, 14 Jun 2011 00:00:00 GMThttp://hdl.handle.net/1956/57652011-06-14T00:00:00ZRobusthet ved skadereservering - Chain-Ladder og influens funksjoner
http://hdl.handle.net/1956/5761
Robusthet ved skadereservering - Chain-Ladder og influens funksjoner
Ones, Sindre
Master thesis
In this thesis I have examined how outliers can affect the IBNR reserve in claims reserving. I have studied influence functions for the classical Chain-Ladder method, and seen that outliers can strongly affect the Chain-Ladder estimate of the total reserve. I have further considered the classical Chain-Ladder method, which is widely used for IBNR reserving, together with a modification of the method designed to capture outliers and give an impression of what the IBNR reserve would be without these outliers in the data set. This has been done for three different data sets, with the results presented in Chapter 7. When the robust Chain-Ladder method is used as an auxiliary tool, it is natural to compare its results with those of the classical Chain-Ladder method. For results where the deviation is large, even if only a couple of per cent, it is necessary to examine the data set more closely, investigate what actually causes the outliers, and assess how likely it is that such extreme observations will occur again.
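The classical Chain-Ladder step underlying the thesis can be sketched as follows. This is the textbook method, not the robust variant studied in the thesis; the triangle values and function name are made up for illustration:

```python
import numpy as np

def chain_ladder(triangle):
    """Classical Chain-Ladder on a cumulative run-off triangle.

    triangle: rows are origin years, columns development years,
    NaN below the diagonal. Returns (ultimates, ibnr) where IBNR is
    the projected ultimate minus the latest observed diagonal value.
    """
    tri = np.asarray(triangle, dtype=float)
    n = tri.shape[1]
    # volume-weighted development factors f_j
    f = []
    for j in range(n - 1):
        mask = ~np.isnan(tri[:, j]) & ~np.isnan(tri[:, j + 1])
        f.append(tri[mask, j + 1].sum() / tri[mask, j].sum())
    # roll each row's latest observed value forward to ultimate
    ultimates, latest = [], []
    for row in tri:
        last = int(np.max(np.where(~np.isnan(row))))
        u = row[last]
        for j in range(last, n - 1):
            u *= f[j]
        ultimates.append(u)
        latest.append(row[last])
    ultimates = np.array(ultimates)
    ibnr = ultimates - np.array(latest)
    return ultimates, ibnr

# Illustrative 3x3 cumulative triangle (all values invented)
nan = float("nan")
tri = [[100.0, 150.0, 165.0],
       [110.0, 170.0, nan],
       [120.0, nan, nan]]
ult, ibnr = chain_ladder(tri)
```

Here the development factors are f_0 = (150+170)/(100+110) and f_1 = 165/150 = 1.1, so the second origin year projects to 170 × 1.1 = 187 with an IBNR of 17. An outlier in any cell feeds directly into these sums, which is exactly the sensitivity the influence functions in the thesis quantify.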
Wed, 01 Jun 2011 00:00:00 GMThttp://hdl.handle.net/1956/57612011-06-01T00:00:00ZIntervall mellom fødslar studert ved "Phase Type Distributions"
http://hdl.handle.net/1956/5759
Intervall mellom fødslar studert ved "Phase Type Distributions"
Asperheim, Leiv Magne
Master thesis
This thesis concerns the time between the first and second child, studied using phase-type distributions. We regard these times as survival times and try to fit a phase-type distribution to the data set under consideration. We also use smoothing of the Nelson-Aalen estimator to find a distribution for the survival times. From the distribution we find, we study two data sets collected at different points in time. This gives us information about whether there have been any changes in the birth pattern during the time elapsed between the data sets.
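The (unsmoothed) Nelson-Aalen estimator that the thesis starts from can be sketched as follows; the smoothing step used in the thesis is omitted, and the toy data are invented:

```python
import numpy as np

def nelson_aalen(times, events):
    """Nelson-Aalen estimator of the cumulative hazard.

    times: observed times (event or censoring);
    events: 1 if the event occurred, 0 if the time is censored.
    Returns the distinct event times and the estimated cumulative
    hazard H(t) = sum over event times s <= t of d_s / n_s.
    """
    times = np.asarray(times, dtype=float)
    events = np.asarray(events, dtype=int)
    t_out, H, cum = [], [], 0.0
    for t in np.unique(times[events == 1]):
        at_risk = np.sum(times >= t)                # n_s: still at risk just before t
        d = np.sum((times == t) & (events == 1))    # d_s: events at t
        cum += d / at_risk
        t_out.append(t)
        H.append(cum)
    return np.array(t_out), np.array(H)

# toy data: five subjects, one censored at t = 4
t, H = nelson_aalen([1, 2, 3, 4, 5], [1, 1, 1, 0, 1])
```

On this toy data the increments are 1/5, 1/4, 1/3 and 1/1; a smoothed version of the resulting step function is what yields the fitted survival-time distribution in the thesis.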
Tue, 31 May 2011 00:00:00 GMThttp://hdl.handle.net/1956/57592011-05-31T00:00:00ZStatistiske modellar for alder-periode-kohort-effektar i epidemiologi
http://hdl.handle.net/1956/5752
Statistiske modellar for alder-periode-kohort-effektar i epidemiologi
Forland, Gunhild
Master thesis
This thesis examines the theory of age-period-cohort models and how they can be implemented in the statistical software R. It also considers how spline functions can be used in age-period-cohort models, and how these models are applied in practice in cancer epidemiology.
Fri, 20 May 2011 00:00:00 GMThttp://hdl.handle.net/1956/57522011-05-20T00:00:00ZOpen-source MATLAB implementation of consistent discretisations on complex grids
http://hdl.handle.net/1956/5745
Open-source MATLAB implementation of consistent discretisations on complex grids
Lie, Knut-Andreas; Krogstad, Stein; Ligaarden, Ingeborg Skjelkvåle; Natvig, Jostein Roald; Nilsen, Halvor Møll; Skaflestad, Bård
Peer reviewed; Journal article
Accurate geological modelling of features
such as faults, fractures or erosion requires grids that
are flexible with respect to geometry. Such grids generally
contain polyhedral cells and complex grid-cell connectivities.
The grid representation for polyhedral grids
in turn affects the efficient implementation of numerical
methods for subsurface flow simulations. It is well
known that conventional two-point flux-approximation
methods are only consistent for K-orthogonal grids and
will, therefore, not converge in the general case. In
recent years, there has been significant research into
consistent and convergent methods, including mixed,
multipoint and mimetic discretisation methods. Likewise,
the so-called multiscale methods based upon hierarchically
coarsened grids have received a lot of
attention. The paper does not propose novel mathematical
methods but instead presents an open-source
Matlab® toolkit that can be used as an efficient test
platform for (new) discretisation and solution methods
in reservoir simulation. The aim of the toolkit is to
support reproducible research and simplify the development,
verification, validation, testing, and
comparison of new discretisation and solution methods
on general unstructured grids, including in particular
corner point and 2.5D PEBI grids. The toolkit consists
of a set of data structures and routines for creating, manipulating and visualising petrophysical data, fluid
models and (unstructured) grids, including support for
industry standard input formats, as well as routines
for computing single and multiphase (incompressible)
flow. We review key features of the toolkit and discuss a
generic mimetic formulation that includes many known
discretisation methods, including both the standard
two-point method as well as consistent and convergent
multipoint and mimetic methods. Apart from the core
routines and data structures, the toolkit contains add-on
modules that implement more advanced solvers and
functionality. Herein, we show examples of multiscale
methods and adjoint methods for use in optimisation of
rates and placement of wells.
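On a 1D Cartesian grid, the two-point flux approximation discussed above reduces to harmonic averaging of half-cell transmissibilities at each face. The toolkit itself is MATLAB; the following standalone Python sketch only illustrates the standard TPFA construction for single-phase incompressible flow with Dirichlet ends (grid size and values are illustrative):

```python
import numpy as np

def tpfa_1d(K, dx, p_left, p_right):
    """Two-point flux approximation on a uniform 1D grid.

    Solves -d/dx(K dp/dx) = 0 with Dirichlet pressures at both ends
    and returns the cell-centre pressures. Each interior-face
    transmissibility is the harmonic average of the two adjacent
    half-cell transmissibilities 2*K_i/dx.
    """
    n = len(K)
    Th = 2.0 * np.asarray(K, dtype=float) / dx   # half-cell transmissibilities
    T = np.zeros(n + 1)
    T[0] = Th[0]                                  # boundary faces: one half cell
    T[n] = Th[-1]
    T[1:n] = 1.0 / (1.0 / Th[:-1] + 1.0 / Th[1:])
    # assemble the standard tridiagonal system
    A = np.zeros((n, n))
    b = np.zeros(n)
    for i in range(n):
        A[i, i] = T[i] + T[i + 1]
        if i > 0:
            A[i, i - 1] = -T[i]
        if i < n - 1:
            A[i, i + 1] = -T[i + 1]
    b[0] += T[0] * p_left
    b[-1] += T[n] * p_right
    return np.linalg.solve(A, b)

# homogeneous medium on [0, 1]: pressure should vary linearly
p = tpfa_1d(K=np.ones(5), dx=0.2, p_left=1.0, p_right=0.0)
```

For a homogeneous medium the scheme reproduces the linear pressure profile exactly; the consistency problems mentioned in the abstract appear only on non-K-orthogonal, multi-dimensional grids, which is what motivates the mimetic and multipoint alternatives in the toolkit.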
Tue, 02 Aug 2011 00:00:00 GMThttp://hdl.handle.net/1956/57452011-08-02T00:00:00ZSampling error distribution for the ensemble Kalman filter update step
http://hdl.handle.net/1956/5744
Sampling error distribution for the ensemble Kalman filter update step
Kovalenko, Andrey; Mannseth, Trond; Nævdal, Geir
Peer reviewed; Journal article
In recent years, data assimilation techniques
have been applied to an increasingly wide spectrum
of problems. Monte Carlo variants of the Kalman
filter, in particular, the ensemble Kalman filter (EnKF),
have gained significant popularity. EnKF is used for
a wide variety of applications, among them for updating
reservoir simulation models. EnKF is a Monte
Carlo method, and its reliability depends on the actual
size of the sample. In applications, a moderately sized
sample (40–100 members) is used for computational
convenience. Problems due to the resulting Monte
Carlo effects require a more thorough analysis of the
EnKF. Earlier we presented a method for the assessment
of the error emerging at the EnKF update step
(Kovalenko et al., SIAM J Matrix Anal Appl, in press).
A particular energy norm of the EnKF error after
a single update step was studied. The energy norm
used to assess the error is hard to interpret. In this
paper, we derive the distribution of the Euclidean norm
of the sampling error under the same assumptions as
before, namely normality of the forecast distribution and negligibility of the observation error. The distribution
depends on the ensemble size, the number and
spatial arrangement of the observations, and the prior
covariance. The distribution is used to study the error
propagation in a single update step on several synthetic
examples. The examples illustrate the changes in reliability
of the EnKF, when the parameters governing the
error distribution vary.
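The EnKF update step whose sampling error is analysed above can be sketched as follows (the perturbed-observation variant on a toy problem; all dimensions, the seed, and the observation setup are illustrative, and the gain is built from the ensemble sample covariance, which is precisely the source of the sampling error studied in the paper):

```python
import numpy as np

def enkf_update(E, H, d, R, rng):
    """One perturbed-observation EnKF analysis step.

    E: (n_state, n_ens) forecast ensemble; H: (n_obs, n_state) linear
    observation operator; d: observations; R: observation error
    covariance. The Kalman gain uses the ensemble sample covariance,
    so the result carries Monte Carlo sampling error.
    """
    N = E.shape[1]
    A = E - E.mean(axis=1, keepdims=True)        # ensemble anomalies
    HA = H @ A
    C = (HA @ HA.T) / (N - 1) + R                # innovation covariance
    K = (A @ HA.T) / (N - 1) @ np.linalg.inv(C)  # sample Kalman gain
    # perturb the observations once per member
    D = d[:, None] + rng.multivariate_normal(np.zeros(len(d)), R, size=N).T
    return E + K @ (D - H @ E)

rng = np.random.default_rng(0)
# toy problem: 3-dimensional state, only the first component observed
E = rng.normal(0.0, 1.0, size=(3, 200))
H = np.array([[1.0, 0.0, 0.0]])
d = np.array([2.0])
R = np.array([[0.01]])
Ea = enkf_update(E, H, d, R, rng)
```

With an accurate observation (small R), the analysed ensemble mean of the observed component moves close to the datum and its spread contracts, while the Monte Carlo effects the paper quantifies enter through the finite-N sample covariance in `C` and `K`.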
Wed, 06 Jul 2011 00:00:00 GMThttp://hdl.handle.net/1956/57442011-07-06T00:00:00ZOn flow properties of surface waves described by model equations of Boussinesq type
http://hdl.handle.net/1956/5729
On flow properties of surface waves described by model equations of Boussinesq type
Ali, Alfatih Mohammed A.
Doctoral thesis
Fri, 23 Mar 2012 00:00:00 GMThttp://hdl.handle.net/1956/57292012-03-23T00:00:00ZQuality of the Analysis Step in EnKF
http://hdl.handle.net/1956/5611
Quality of the Analysis Step in EnKF
Bergo, Brede Rem
Master thesis
In many physical applications we want to characterize the parameters of a system based on indirect observations or measurements.
In a reservoir simulator setting, the goal is to simulate the production of hydrocarbons from the reservoir. This way we can try out different production strategies and optimize the production plan before the reservoir is put on production. These decisions depend on good simulations of the flow of oil, gas and water in the porous rocks.
To achieve appropriate flow calculations, a good estimate of the flow properties of the rock is needed. The process of building an approximation to the reservoir itself and its properties is called reservoir modeling or reservoir characterization. For this, prior information is used, such as well logs, analyzed core plugs from the appraisal wells and seismic data. This information gives us some estimate of our poorly known reservoir parameters, like the porosity and permeability fields. The performance of the reservoir, given a recovery strategy, can be predicted by a reservoir simulator. After the field is put on production, one may use the production data to improve the reservoir model. The basic idea is that predicted performance should match the observed performance. By tuning the parameters in the model, one tries to fit the output of the simulator to the production history. This is referred to as history matching, which is a nonlinear inverse problem.
A promising method for automatically performing the history matching is the ensemble Kalman filter (EnKF). EnKF is a sequential data assimilation algorithm using Monte Carlo techniques, in which measurements and prior information about the system are combined to make the best weighted estimate based on their uncertainties. After the assimilation, the model is run forward in time using the reservoir simulator. When new observations become available, the next analysis step incorporates them to produce a new analyzed estimate.
Assimilating a large number of data at the same time has proved to be a difficult challenge for EnKF. This could correspond to the use of, e.g., 4D seismic data. One computational advantage is that the covariance matrix of the system is never explicitly calculated, but rather approximated from the ensemble itself. However, spurious correlations in the ensemble sample covariance matrix are one problem to be addressed. In particular, properties in cells far away from the location of measurements are affected far too strongly. EnKF is based on the Kalman filter, which is a recursive filter for linear problems.
In this master thesis we consider the quality of the analysis step of the EnKF. Our main focus is the sampling errors caused by the approximated sample covariance matrix when an increasing number of measurements are assimilated. The work here is inspired by [KovalenkoSamplingError2011, KovalenkoECMOR], where a probabilistic measure for the sampling error is derived under the assumptions of a normally distributed prior and negligible measurement errors. Here we try a somewhat different approach, using approximate calculations and Neumann series to assess the sampling error. We consider measurement errors of varying size.
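The spurious correlations mentioned above are easy to demonstrate: draw an ensemble from truly uncorrelated unit-variance cells and inspect the off-diagonal entries of the sample correlation matrix. A Python sketch (cell count, ensemble sizes, and seed are illustrative):

```python
import numpy as np

def max_spurious_corr(n_cells, n_ens, seed=0):
    """Largest absolute off-diagonal sample correlation from an
    ensemble drawn from truly uncorrelated, unit-variance cells.

    Any nonzero value is pure sampling noise: the 'spurious
    correlations' that make distant grid cells respond to
    measurements they are physically unrelated to.
    """
    rng = np.random.default_rng(seed)
    E = rng.normal(size=(n_cells, n_ens))   # rows = cells, cols = members
    C = np.corrcoef(E)                      # sample correlation matrix
    off = C[~np.eye(n_cells, dtype=bool)]   # off-diagonal entries only
    return np.abs(off).max()

small = max_spurious_corr(50, 40)     # moderate ensemble, typical in practice
large = max_spurious_corr(50, 4000)   # noise shrinks roughly as 1/sqrt(N)
```

With a 40-member ensemble, some pairs of genuinely independent cells show substantial sample correlation, while a 4000-member ensemble brings the noise floor down by an order of magnitude; this is the ensemble-size dependence of the sampling error that the thesis analyses.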
Wed, 15 Jun 2011 00:00:00 GMThttp://hdl.handle.net/1956/56112011-06-15T00:00:00ZKarakterisering av strømning gjennom en forkastning med fault facies- og standardmodell ved bruk av reservoarsimulator
http://hdl.handle.net/1956/5597
Karakterisering av strømning gjennom en forkastning med fault facies- og standardmodell ved bruk av reservoarsimulator
Carstensen, Carl-Martin
Master thesis
Faults have a considerable impact on the flow in a reservoir. Until now, faults have been modelled as a surface or a plane in ordinary reservoir simulators, but studies have shown that faults must often be regarded as volumes rather than surfaces. A fault facies model accounts for this by modelling the faults as volumes with facies that carry the fault properties, but solving this model numerically is very time-consuming and in some cases impossible.
In this study, the fault flows have been examined in the reservoir simulator ECLIPSE. Two models were built: a standard model in which the fault is modelled as a surface, and a fault facies model in which the fault is modelled as a volume.
The purpose of the study was to investigate the nature of the fault flows, whether the standard model simulated the fault flows with sufficient accuracy, and whether the results from the fault facies model could be reproduced in an equivalent standard model (an extended standard model).
Many different runs were made with different fault properties. Barriers were implemented in both the reservoir and the fault. The results of the study indicated that the standard model was not sufficiently accurate. The saturation picture from the fault facies model was successfully reproduced in an extended standard model, but the pressure picture differed between the models. It was therefore concluded that the fault facies model should be used when fault models are to be simulated.
Mon, 21 Nov 2011 00:00:00 GMThttp://hdl.handle.net/1956/55972011-11-21T00:00:00ZOribatid mites in a changing world
http://hdl.handle.net/1956/5561
Oribatid mites in a changing world
De la Riva Caballero, Arguitxu
Doctoral thesis
The main scope of this thesis is to
illustrate the validity of oribatid mites as
tools for palaeoecological
reconstructions. Palaeoecology studies
the responses of past organisms to past
environmental changes. This can be
accomplished through the use of
biological proxies, which are indicators
of past conditions. The search for
additional means of distinguishing
climate change has only recently led to
the use of other commonly found
biological proxies such as tiny oribatid
mites known as moss-mites. Oribatid
mites are among the most numerous
biological remains in anoxic sediments,
yet until now oribatids have not been
widely used due to the uncertainties
about their present distribution and the
lack of expertise to identify them to
species level. This thesis contains four
papers which provide evidence about
how oribatid mites, when they are
properly identified to species level and
their background distribution is
adequately known, can give useful
additional and supporting information
for reconstructing past habitat and
environmental conditions.
Paper I studied oribatid
preferences and ecology in different
habitats, mainly forested, in western
Norway. One hundred and ninety-two
species were found, of which 64 were new records for Norway. The species
Chamobates borealis, Oppiella nova,
Moritzoppia neerlandica, and
Rhinoppia subpectinata characterised
the oribatid communities of Betula,
mixed, and Picea forest subsets.
Deciduous forest oribatid communities
were characterised by Achipteria
coleoptrata, Acrotritria ardua,
Ceratozetes gracilis, and Oribatella
calcarata. Hemileius initialis,
Nanhermannia dorsalis, C. borealis,
Tectocepheus velatus, and Atropacarus
striculus characterised wet habitats. In
water-logged habitats, Limnozetes
ciliatus, Mucronothrus nasalis, and
Trimalaconothrus glaber dominated.
Carabodes labyrinthicus, C.
marginatus, Melanozetes mollicomus,
and T. velatus characterised the oribatid
community of the lichen and moss
subset. The tree-line ecotone was
dominated by the euryceous species H.
initialis, T. velatus, and Oribatula
tibialis. This study represents a
thorough survey of oribatid
communities in western Norway, and
the insights it gives are an important
tool for habitat reconstructions, as they
provide the background knowledge
about modern oribatid fauna needed to
identify the type of past plant
community and past environments
represented in Quaternary sediments.
Paper II studied the oribatid
communities at the tree-line in western
Norway and compared them with the oribatid fossil assemblages found in
Lake Trettetjørn. The modern oribatid
assemblage provided a guide to the
reliability of the fossil assemblages to
reconstruct ecological and
environmental changes and, in addition,
to find the most favourable coring point
within the small lake. Results showed
that the core retrieved from the middle
of Lake Trettetjørn basin represented
the oribatid fauna from the catchment
area. Aquatic oribatids were the best
group represented in the lake sediments,
followed by oribatids from the habitats
adjacent to the lake. This constitutes
good evidence that oribatids are
excellent indicators of local habitats.
Comparison of the oribatid fauna found
in the lake traps with the oribatid
assemblages from paper III illustrated
the importance of identifying the mites
to species level, as this increased the
ecological indicator value and,
therefore, the reliability of the
palaeoreconstructions.
In Paper III, sub-fossil oribatid
mites, pollen, plant macrofossils, and
diatoms from a lake sediment core from
western Norway were studied. This
multi-proxy study attempted to
reconstruct tree-line fluctuations and
their impact on Lake Trettetjørn’s
environment. Evidence from pollen,
plant macrofossils, and oribatids
complemented and corroborated each
other in the reconstruction of the
vegetational development. A semi-open
grassland developed into forest. Mires began to replace forested areas on
the landscape as a more oceanic climate
began to prevail. All proxies indicated
increasingly intensive human land-use
as the Upsete settlement grew to
accommodate the construction of the
Bergen-Oslo railway.
In Paper IV, oribatid mites and
pollen were used to reconstruct the local
habitat at an archaeological excavation.
The study aimed to identify the start of
cereal cultivation at Kvitevoll farm, on
Halsnøy island, western Norway. The
high number of oribatid remains
identified to species level and the close
match to pollen stratigraphy led to a
detailed palaeoenvironmental
reconstruction. Oribatids and pollen
indicated the development of a moist
forest followed by vegetation openings
and mire expansion over the site. At the
top of the sequence, the presence of
oribatids such as Tectocepheus velatus
and the increase in members of the
family Oppiidae indicated a higher
degree of disturbance, probably from
grazing. Pollen of Cerealia indicated the
start of cultivation around the same
time.
Fri, 09 Dec 2011 00:00:00 GMThttp://hdl.handle.net/1956/55612011-12-09T00:00:00ZEarly prediction of response to radiotherapy and androgen-deprivation therapy in prostate cancer by repeated functional MRI: a preclinical study
http://hdl.handle.net/1956/5549
Early prediction of response to radiotherapy and androgen-deprivation therapy in prostate cancer by repeated functional MRI: a preclinical study
Røe, Kathrine; Kakar, Manish; Seierstad, Therese; Ree, Anne H.; Olsen, Dag R.
Peer reviewed; Journal article
Background: In modern cancer medicine, morphological magnetic resonance imaging (MRI) is routinely used in
diagnostics, treatment planning and assessment of therapeutic efficacy. During the past decade, functional imaging
techniques like diffusion-weighted (DW) MRI and dynamic contrast-enhanced (DCE) MRI have increasingly been
included into imaging protocols, allowing extraction of intratumoral information of underlying vascular, molecular
and physiological mechanisms, not available in morphological images. Separately, pre-treatment and early changes
in functional parameters obtained from DWMRI and DCEMRI have shown potential in predicting therapy response.
We hypothesized that the combination of several functional parameters increased the predictive power.
Methods: We challenged this hypothesis by using an artificial neural network (ANN) approach, exploiting nonlinear
relationships between individual variables, which is particularly suitable in treatment response prediction involving
complex cancer data. A clinical scenario was elicited by using 32 mice with human prostate carcinoma xenografts
receiving combinations of androgen-deprivation therapy and/or radiotherapy. Pre-radiation and on days 1 and 9
following radiation three repeated DWMRI and DCEMRI acquisitions enabled derivation of the apparent diffusion
coefficient (ADC) and the vascular biomarker Ktrans, which together with tumor volumes and the established
biomarker prostate-specific antigen (PSA), were used as inputs to a back propagation neural network,
independently and combined, in order to explore their feasibility of predicting individual treatment response
measured as 30 days post-RT tumor volumes.
Results: ADC, volumes and PSA as inputs to the model revealed a correlation coefficient of 0.54 (p < 0.001)
between predicted and measured treatment response, while Ktrans, volumes and PSA gave a correlation coefficient
of 0.66 (p < 0.001). The combination of all parameters (ADC, Ktrans, volumes, PSA) successfully predicted treatment
response with a correlation coefficient of 0.85 (p < 0.001).
Conclusions: We have shown in a preclinical investigation that the combination of early changes in several
functional MRI parameters provides additional information about therapy response. If such an approach could be
clinically validated, it may become a tool to help identify non-responding patients early in treatment, allowing
these patients to be considered for alternative treatment strategies, and, thus, providing a contribution to the
development of individualized cancer therapy.
Wed, 08 Jun 2011 00:00:00 GMThttp://hdl.handle.net/1956/55492011-06-08T00:00:00ZConvective mixing in geological carbon storage
http://hdl.handle.net/1956/5540
Convective mixing in geological carbon storage
Elenius, Maria
Doctoral thesis
The industrial era has seen an exponential growth in the atmospheric concentration
of carbon dioxide (CO2), resulting mainly from the burning of fossil fuels. This
can cause changes in the climate that have severe impacts on freshwater and food
supply, ecosystems and society. One of the most viable options to reduce CO2
emissions is to store it in geological formations, in particular in saline aquifers.
In this option, the carbon is again stored in the subsurface, from which it was
extracted. The first geological storage project was initiated in Norway in 1996,
and CO2 had long before that been injected into geological formations to enhance oil
recovery. Storage occurs with CO2 in a so-called supercritical state, and this fluid
is buoyant in the formation. Four physical mechanisms help trap the carbon
in the formation: the CO2 plume accumulates under a low-permeability caprock;
CO2 is trapped as disconnected drops in small pores; buoyancy is lost when CO2
dissolves into the water; and on longer time-scales chemical reactions incorporate
the carbon in minerals. Dissolution trapping is largely determined by convective
mixing, which is a rich problem that was first investigated almost 100 years ago.
We investigate the influence of convective mixing on dissolution trapping in geological
storage of CO2. Most formations that can be used for CO2 storage are slightly tilted. We show
with numerical simulations that dissolution trapping must in general be acknowledged
when questions about the final migration distance and time of the CO2
plume are to be answered. The saturations in the plume correspond well to transition
zones consistent with capillary equilibrium. The results also show that the
capillary transition zone, in which both the supercritical CO2 plume and the water
phase exist and are mobile, participates in the convective mixing. Using linear
stability analysis complemented with numerical simulations, we show that the
interaction between convective mixing and the capillary transition zone leads to
considerably larger dissolution rates and a reduced onset time for enhanced convective
mixing compared to when this interaction is neglected. The selection of
the wavelength that first becomes unstable remains almost unchanged by the interaction.
A statistical investigation of the onset time of enhanced convective mixing,
with the capillary transition zone neglected, reveals that it is notably
larger than the onset time of instability for three example formations. However,
comparison of these simulation results with the investigations in a sloping aquifer
preliminarily suggests that the distance the plume propagates during the onset
time of enhanced convective mixing is negligible and that this time therefore
can be assumed to be zero, with the possible exception of aquifers that have steep
slopes.
Fri, 25 Nov 2011 00:00:00 GMThttp://hdl.handle.net/1956/55402011-11-25T00:00:00ZCO2 trapping in sloping aquifers: High resolution numerical simulations
http://hdl.handle.net/1956/5539
CO2 trapping in sloping aquifers: High resolution numerical simulations
Elenius, Maria; Tchelepi, Hamdi A.; Johannsen, Klaus
Conference object
Fri, 01 Jan 2010 00:00:00 GMThttp://hdl.handle.net/1956/55392010-01-01T00:00:00ZLiposomal doxorubicin improves radiotherapy response in hypoxic prostate cancer xenografts
http://hdl.handle.net/1956/5481
Liposomal doxorubicin improves radiotherapy response in hypoxic prostate cancer xenografts
Hagtvet, Eirik; Røe, Kathrine; Olsen, Dag R.
Peer reviewed; Journal article
Background: Tumor vasculature frequently fails to supply sufficient levels of oxygen to tumor tissue resulting in
radioresistant hypoxic tumors. To improve therapeutic outcome radiotherapy (RT) may be combined with cytotoxic
agents.
Methods: In this study we have investigated the combination of RT with the cytotoxic agent doxorubicin (DXR)
encapsulated in pegylated liposomes (PL-DXR). The PL-DXR formulation Caelyx® was administered to male mice
bearing human, androgen-sensitive CWR22 prostate carcinoma xenografts in a dose of 3.5 mg DXR/kg, in
combination with RT (2 Gy/day × 5 days) performed under normoxic and hypoxic conditions. Hypoxic RT was
achieved by experimentally inducing tumor hypoxia by clamping the tumor-bearing leg five minutes prior to and
during RT. Treatment response evaluation consisted of tumor volume measurements and dynamic contrast-enhanced
magnetic resonance imaging (DCE MRI) with subsequent pharmacokinetic analysis using the Brix model.
Imaging was performed pre-treatment (baseline) and 8 days later. Further, hypoxic fractions were determined by
pimonidazole immunohistochemistry of excised tumor tissue.
Results: As expected, RT was significantly less effective under hypoxic than under normoxic
conditions. However, concomitant administration of PL-DXR significantly improved the therapeutic outcome
following RT in hypoxic tumors. Further, the pharmacokinetic DCE MRI parameters and hypoxic fractions suggest
PL-DXR to induce growth-inhibitory effects without interfering with tumor vascular functions.
Conclusions: We found that DXR encapsulated in liposomes improved the therapeutic effect of RT under hypoxic
conditions without affecting vascular functions. Thus, we propose that, for cytotoxic agents affecting tumor vascular
functions, liposomes may be a promising drug delivery technology for use in chemoradiotherapy.
Fri, 07 Oct 2011 00:00:00 GMThttp://hdl.handle.net/1956/54812011-10-07T00:00:00ZGenotyping errors in a calibrated DNA register: implications for identification of individuals
http://hdl.handle.net/1956/5480
Genotyping errors in a calibrated DNA register: implications for identification of individuals
Haaland, Øystein; Glover, Kevin; Seliussen, Bjørghild Breistein; Skaug, Hans J.
Peer reviewed; Journal article
Background: The use of DNA methods for the identification and management of natural resources is gaining
importance. In the future, it is likely that DNA registers will play an increasing role in this development.
Microsatellite markers have been the primary tool in ecological, medical and forensic genetics for the past two
decades. However, these markers are characterized by genotyping errors, and display challenges with calibration
between laboratories and genotyping platforms. The Norwegian minke whale DNA register (NMDR) contains
individual genetic profiles at ten microsatellite loci for 6737 individuals captured in the period 1997-2008. These
analyses have been conducted in four separate laboratories for nearly a decade, and offer a unique opportunity to
examine genotyping errors and their consequences in an individual based DNA register. We re-genotyped 240
samples, and, for the first time, applied a mixed regression model to look at potentially confounding effects on
genotyping errors.
Results: The average genotyping error rate for the whole dataset was 0.013 per locus and 0.008 per allele. Errors
were, however, not evenly distributed. A decreasing trend across time was apparent, along with a strong within-sample
correlation, suggesting that error rates heavily depend on sample quality. In addition, some loci were more
error prone than others. False allele size constituted 18 of 31 observed errors, and the remaining errors were ten
false homozygotes (i.e., the true genotype was a heterozygote) and three false heterozygotes (i.e., the true
genotype was a homozygote).
Conclusions: To our knowledge, this study represents the first investigation of genotyping error rates in a wildlife
DNA register, and the first application of mixed models to examine multiple effects of different factors influencing
the genotyping quality. It was demonstrated that DNA registers accumulating data over time have the ability to
maintain calibration and genotyping consistency, despite analyses being conducted on different genotyping
platforms and in different laboratories. Although errors were detected, it is demonstrated that if the re-genotyping
of individual samples is possible, these will have a minimal effect on the database’s primary purpose, i.e., to
perform individual identification.
Wed, 20 Apr 2011 00:00:00 GMThttp://hdl.handle.net/1956/54802011-04-20T00:00:00ZLie–Butcher series and geometric numerical integration on manifolds
http://hdl.handle.net/1956/5436
Lie–Butcher series and geometric numerical integration on manifolds
Lundervold, Alexander
Doctoral thesis
The thesis belongs to the field of “geometric numerical integration” (GNI), whose
aim is to construct and study numerical integration methods for differential equations
that preserve some geometric structure of the underlying system. Many systems
have conserved quantities, e.g. the energy in a conservative mechanical system
or the symplectic structures of Hamiltonian systems, and numerical methods
that take this into account are often superior to those constructed with the more
classical goal of achieving high order.
An important tool in the study of numerical methods is the Butcher series (B-series)
invented by John Butcher in the 1960s. These are formal series expansions
indexed by rooted trees and have been used extensively for order theory and the
study of structure preservation. The thesis puts particular emphasis on B-series
and their generalization to methods for equations evolving on manifolds, called
Lie–Butcher series (LB-series).
It has become apparent that algebra and combinatorics can bring a lot of insight
into this study. Many of the methods and concepts are inherently algebraic or
combinatoric, and the tools developed in these fields can often be used to great
effect. Several examples of this will be discussed throughout. The thesis is structured as follows: background material on geometric numerical
integration is collected in Part I. It consists of several chapters: in Chapter 1 we
look at some of the main ideas of geometric numerical integration. The emphasis
is put on B-series, and the analysis of these. Chapter 2 is devoted to differential
equations evolving on manifolds, and the series corresponding to B-series in this
setting. Chapter 3 consists of short summaries of the papers included in Part II.
Part II is the main scientific contribution of the thesis, consisting of reproductions
of three papers on material related to geometric numerical integration.
Fri, 04 Nov 2011 00:00:00 GMThttp://hdl.handle.net/1956/54362011-11-04T00:00:00ZScroll codes over curves of higher genus
http://hdl.handle.net/1956/5336
Scroll codes over curves of higher genus
Johnsen, Trygve; Rasmussen, Nils Henry
Journal article; Peer reviewed
We construct linear codes from scrolls over curves of high genus and study the higher support weights d_i of these codes. We embed the scroll into projective space P^(k-1) and calculate bounds for the d_i by considering the maximal number of F_q-rational points that are contained in a codimension h subspace of P^(k-1). We find lower bounds for the d_i and, for large i, calculate the exact values of the d_i.
This work follows the natural generalisation of Goppa codes to higher-dimensional varieties as studied by S.H. Hansen, C. Lomont and T. Nakashima.
Fri, 01 Jan 2010 00:00:00 GMT
http://hdl.handle.net/1956/5336

Numerical Simulation Studies of the Long-term Evolution of a CO2 Plume in a Saline Aquifer with a Sloping Caprock
http://hdl.handle.net/1956/5230
Numerical Simulation Studies of the Long-term Evolution of a CO2 Plume in a Saline Aquifer with a Sloping Caprock
Pruess, Karsten; Nordbotten, Jan Martin
Peer reviewed; Journal article
We have used the TOUGH2-MP/ECO2N code to perform numerical simulation
studies of the long-term behavior of CO2 stored in an aquifer with a sloping caprock. This
problem is of great practical interest, and is very challenging due to the importance of multiscale
processes. We find that the mechanism of plume advance is different from what is seen
in a forced immiscible displacement, such as gas injection into a water-saturated medium.
Instead of pushing the water forward, the plume advances because the vertical pressure gradients
within the plume are smaller than hydrostatic, causing the groundwater column to
collapse ahead of the plume tip. Increased resistance to vertical flow of aqueous phase in
anisotropic media leads to reduced speed of up-dip plume advancement. Vertical equilibrium
models that ignore effects of vertical flow will overpredict the speed of plume advancement.
The CO2 plume becomes thinner as it advances, but the speed of advancement remains constant
over the entire simulation period of up to 400 years, with migration distances of more
than 80 km. Our simulations include dissolution of CO2 into the aqueous phase and associated
density increase, and molecular diffusion. However, no convection develops in the
aqueous phase because it is suppressed by the relatively coarse (sub-) horizontal gridding
required in a regional-scale model. A first crude sub-grid-scale model was developed to represent
convective enhancement of CO2 dissolution. This process is found to greatly reduce
the thickness of the CO2 plume, but, for the parameters used in our simulations, does not
affect the speed of plume advancement.
Wed, 02 Feb 2011 00:00:00 GMT
http://hdl.handle.net/1956/5230

New examples of curves with a one-dimensional family of pencils of minimal degree
http://hdl.handle.net/1956/5219
New examples of curves with a one-dimensional family of pencils of minimal degree
Rasmussen, Nils Henry
Peer reviewed; Journal article
We give a geometric construction of sub-linear systems on a K3 surface consisting of
smooth curves C with infinitely many g^1_{gon(C)}'s.
Thu, 14 Jul 2011 00:00:00 GMT
http://hdl.handle.net/1956/5219

On Semi-implicit Splitting Schemes for the Beltrami Color Image Filtering
http://hdl.handle.net/1956/5210
On Semi-implicit Splitting Schemes for the Beltrami Color Image Filtering
Rosman, Guy; Dascal, Lorina; Tai, Xue-Cheng; Kimmel, Ron
Peer reviewed; Journal article
The Beltrami flow is an efficient nonlinear filter
that has been shown to be effective for color image processing.
The corresponding anisotropic diffusion operator strongly
couples the spectral components. Usually, this flow is implemented
by explicit schemes, which are stable only for
very small time steps and therefore require many iterations.
In this paper we introduce a semi-implicit Crank-Nicolson
scheme based on locally one-dimensional (LOD)/additive
operator splitting (AOS) for implementing the anisotropic
Beltrami operator. The mixed spatial derivatives are treated
explicitly, while the non-mixed derivatives are approximated in an implicit manner. In the case of constant coefficients, the
LOD splitting scheme is proven to be unconditionally stable.
Numerical experiments indicate that the proposed scheme
is also stable in more general settings. Stability, accuracy,
and efficiency of the splitting schemes are tested in applications
such as the Beltrami-based scale-space, Beltrami denoising
and Beltrami deblurring. In order to further accelerate
the convergence of the numerical scheme, the reduced
rank extrapolation (RRE) vector extrapolation technique is
employed.
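To illustrate the kind of unconditional stability the abstract refers to, here is a minimal one-dimensional analogue: a single backward-Euler diffusion step solved with the Thomas (tridiagonal) algorithm. The function name and boundary treatment are illustrative assumptions, not the paper's scheme:

```python
def implicit_diffusion_step(u, dt, dx, kappa=1.0):
    """One backward-Euler step of u_t = kappa*u_xx with fixed end values,
    solved by the Thomas (tridiagonal) algorithm; stable for any dt."""
    n = len(u)
    r = kappa * dt / dx**2
    # Tridiagonal system: -r*x[i-1] + (1+2r)*x[i] - r*x[i+1] = u[i]
    a = [-r] * n             # sub-diagonal
    b = [1.0 + 2.0 * r] * n  # main diagonal
    c = [-r] * n             # super-diagonal
    d = list(u)
    b[0] = b[-1] = 1.0       # Dirichlet rows: boundary values are kept fixed
    c[0] = 0.0
    a[-1] = 0.0
    # forward elimination
    for i in range(1, n):
        m = a[i] / b[i - 1]
        b[i] -= m * c[i - 1]
        d[i] -= m * d[i - 1]
    # back substitution
    x = [0.0] * n
    x[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):
        x[i] = (d[i] - c[i] * x[i + 1]) / b[i]
    return x
```

Even with a time step far beyond the explicit stability limit (r >> 1/2), the solution stays bounded between the extremes of the data.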
Sat, 15 Jan 2011 00:00:00 GMT
http://hdl.handle.net/1956/5210

Numerical modelling of organic waste dispersion from fjord located fish farms
http://hdl.handle.net/1956/5209
Numerical modelling of organic waste dispersion from fjord located fish farms
Ali, Alfatih Mohammed A.; Thiem, Øyvind; Berntsen, Jarle
Peer reviewed; Journal article
In this study, a three-dimensional particle
tracking model coupled to a terrain following ocean
model is used to investigate the dispersion and the deposition
of fish farm particulate matter (uneaten food
and fish faeces) on the seabed due to tidal currents. The
particle tracking model uses the computed local flow
field for advection of the particles and random movement
to simulate the turbulent diffusion. Each particle
is given a settling velocity which may be drawn from
a probability distribution according to settling velocity
measurements of faecal and feed pellets. The results
show that the maximum concentration of organic waste
from fast-sinking particles is found under the fish cage
and decreases monotonically away from the cage area.
The maximum can also split into two peaks located on
either side of the centre of the fish cage area along
the current direction. This process depends on
the sinking time (time needed for a particle to settle at
the bottom), the tidal velocity and the fish cage size. If the sinking time is close to a multiple of the tidal
period, the maximum concentration point will be under
the fish cage irrespective of the tide strength. This is
due to the nature of the tidal current first propagating
the particles away and then bringing them back when
the tide reverses. Increasing the cage size increases the
likelihood for a maximum waste accumulation beneath
the fish farm, and larger farms usually means larger biomasses
which can make the local pollution even more
severe. The model is validated by using an analytical
model which uses an exact harmonic representation
of the tidal current, and the results show an excellent
agreement. This study shows that the coupled ocean
and particle model can be used in more realistic applications
to help estimate the local environmental
impact due to fish farms.
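The advection-plus-random-walk particle model described above can be sketched in a few lines. This toy version (all names and parameters are hypothetical, not the authors' model) also reproduces the sinking-time effect: when the sinking time equals a full tidal period, the tide brings the particles back under the cage.

```python
import math
import random

def settle_positions(n_particles, depth, w_s, u_tide, period,
                     dt=1.0, d_rand=0.1, seed=0):
    """Track particles released at x=0, z=depth until they reach the seabed:
    horizontal advection by a harmonic tidal current u(t)=u_tide*sin(2*pi*t/period),
    a Gaussian random-walk step for turbulent diffusion, and a constant
    settling velocity w_s. Returns the x-positions where particles land."""
    rng = random.Random(seed)
    out = []
    for _ in range(n_particles):
        x, z, t = 0.0, depth, 0.0
        while z > 0.0:
            u = u_tide * math.sin(2.0 * math.pi * t / period)
            x += u * dt + rng.gauss(0.0, d_rand)  # advection + turbulent diffusion
            z -= w_s * dt                          # settling
            t += dt
        out.append(x)
    return out
```

With diffusion switched off and a sinking time of one tidal period (depth/w_s = period), the net tidal displacement cancels and all particles land at x = 0; with a half-period sinking time they land far down-current.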
Thu, 21 Apr 2011 00:00:00 GMT
http://hdl.handle.net/1956/5209

Multiscale mass conservative domain decomposition preconditioners for elliptic problems on irregular grids
http://hdl.handle.net/1956/5207
Multiscale mass conservative domain decomposition preconditioners for elliptic problems on irregular grids
Sandvin, Andreas; Nordbotten, Jan Martin; Aavatsmark, Ivar
Journal article; Peer reviewed
Multiscale methods can in many cases be
viewed as special types of domain decomposition preconditioners.
The localisation approximations introduced
within the multiscale framework are dependent
upon both the heterogeneity of the reservoir and the
structure of the computational grid. While previous
works on multiscale control volume methods have focused
on heterogeneous elliptic problems on regular
Cartesian grids, we have tested the multiscale control
volume formulations on two-dimensional elliptic problems
involving heterogeneous media and irregular grid
structures. Our study shows that the tangential flow
approximation commonly used within multiscale methods
is not suited for problems involving rough grids.
We present a more robust mass conservative domain
decomposition preconditioner for simulating flow in
heterogeneous porous media on general grids.
Wed, 01 Jun 2011 00:00:00 GMT
http://hdl.handle.net/1956/5207

Sequential Data Assimilation in High Dimensional Nonlinear Systems
http://hdl.handle.net/1956/5178
Sequential Data Assimilation in High Dimensional Nonlinear Systems
Stordal, Andreas Størksen
Doctoral thesis
Since its introduction in 1994, the ensemble Kalman filter (EnKF) has gained a lot
of attention as a tool for sequential data assimilation in many scientific areas. Because
the algorithm is computationally fast and easy to implement, its popularity has increased
greatly over the last decades, especially in the fields of oceanography and petroleum engineering.
Although the EnKF has been successfully applied to many real-world problems,
it has a major drawback from a statistical point of view: the algorithm only converges to
the optimal solution if the system under consideration is linear and all random variables
describing the system are Gaussian.
There exist sequential Monte Carlo (SMC) methods with correct asymptotic properties,
but both numerical and theoretical studies have shown that the number of samples
must increase exponentially with the dimension of the model to avoid a collapse of the
algorithm. For large scale geophysical systems, such as petroleum reservoirs or ocean
models, each sample requires the solution of a system of partial differential equations
on a large grid. The computational burden of solving these equations using numerical
schemes naturally puts an upper limit on the number of samples we can use in practice.
Hence these sequential Monte Carlo methods are not applicable, at least in their
simplest form, in large scale geophysical models.
This thesis explores the possibility of bridging the gap between EnKF and one of the
asymptotically correct SMC methods, known as particle filters, by extending already
known theory on Gaussian mixture filters. In addition a sensitivity analysis is carried
out for a new type of data in reservoir models.
A new approximative filter is developed by introducing an additional parameter in
the standard Gaussian mixture filter. The adaptive Gaussian mixture filter (AGM) has
two parameters, and by choosing these differently the filter may run as the EnKF, a
particle filter, or something in between. The method is tested on the Lorenz96 model for
comparison with EnKF and Gaussian mixture filters. Further comparison with EnKF is
made after running AGM on a 2D two-phase and a 3D three-phase petroleum reservoir.
We generalize AGM and compute error bounds and asymptotic properties using classical
approximation techniques before we explore the effects of estimating the first and
second order moments locally in Kalman type filters. By local estimation we mean local
in value and not in space. Two different approaches are suggested and applied to the
chaotic Lorenz63 model and a 1D reservoir model. Finally, the sensitivity of reservoir
parameters to a new type of data, nanosensor observations, is calculated. Both analytical
and numerical results are provided. A simulation experiment is included from
which we can compute the resolution of the estimated parameters numerically.
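For readers unfamiliar with the EnKF update the thesis builds on, a toy analysis step for a directly observed scalar state might look as follows. This is a generic textbook sketch with a perturbed-observation update, not the AGM filter itself:

```python
import random

def enkf_analysis(ensemble, obs, obs_std, seed=0):
    """EnKF analysis step for a scalar state observed directly: each member is
    nudged toward a perturbed observation, with the Kalman gain estimated
    from the ensemble spread."""
    rng = random.Random(seed)
    n = len(ensemble)
    mean = sum(ensemble) / n
    var = sum((x - mean) ** 2 for x in ensemble) / (n - 1)  # forecast variance
    gain = var / (var + obs_std ** 2)                       # Kalman gain
    return [x + gain * (obs + rng.gauss(0.0, obs_std) - x) for x in ensemble]
```

After the analysis step the ensemble mean moves toward the observation and the ensemble spread shrinks; for nonlinear, non-Gaussian systems this update is only an approximation, which is the gap the thesis addresses.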
Fri, 21 Oct 2011 00:00:00 GMT
http://hdl.handle.net/1956/5178

Numerical Methods for Conservation Laws with a Discontinuous Flux Function
http://hdl.handle.net/1956/5176
Numerical Methods for Conservation Laws with a Discontinuous Flux Function
Tveit, Svenn
Master thesis
When simulating two-phase flow in a porous medium, numerical methods are used to solve the equations of flow, called conservation laws. In industry this is done by a reservoir simulator, and the most widely used method is the Upstream Mobility scheme. It is useful to compare how this scheme solves the flow problem against academically accepted schemes, like Godunov's method and the Engquist-Osher method. To assess the numerical approximations, the underlying theory must be known, especially when dealing with spatial discontinuities; only then is a comparison between numerical results applicable to physical models. In this thesis we have investigated the theory of conservation laws with discontinuous flux functions, introduced a new scheme for this problem, the Local Lax-Friedrichs scheme, and compared the Upstream Mobility scheme against the Godunov, Engquist-Osher and Local Lax-Friedrichs schemes.
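As a rough illustration of the family of schemes compared in the thesis, a classical Lax-Friedrichs update for a scalar conservation law u_t + f(u)_x = 0 on a periodic grid can be sketched as below. This is the standard global scheme, not the thesis' Local Lax-Friedrichs variant:

```python
def lax_friedrichs_step(u, dt, dx, flux):
    """One Lax-Friedrichs step for u_t + f(u)_x = 0 on a periodic grid:
    u_i^{n+1} = (u_{i-1} + u_{i+1})/2 - dt/(2*dx) * (f(u_{i+1}) - f(u_{i-1}))."""
    n = len(u)
    return [0.5 * (u[(i - 1) % n] + u[(i + 1) % n])
            - dt / (2.0 * dx) * (flux(u[(i + 1) % n]) - flux(u[(i - 1) % n]))
            for i in range(n)]
```

Because the flux differences telescope on a periodic grid, the total mass sum(u)*dx is conserved exactly by each step (here with the Burgers flux f(u) = u^2/2 as an example).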
Wed, 01 Jun 2011 00:00:00 GMT
http://hdl.handle.net/1956/5176

Value of different data types for estimation of permeability
http://hdl.handle.net/1956/5139
Value of different data types for estimation of permeability
Fossum, Kristian
Master thesis
The interrelation between sensitivity, non-linearity and scale, associated with the inverse problem of identifying permeability from measurements of fluid flow, is considered. The governing equations for fluid flow in a porous medium are presented, and a measure of sensitivity and non-linearity is derived. By making some simplifications, this measure can be analyzed for different flow structures. The effects of the simplifications are then investigated separately. Finally, some numerical experiments are conducted. They highlight both the effects of the simplifications and whether the interrelation between sensitivity and scale carries over to a 2-D setting.
Wed, 01 Jun 2011 00:00:00 GMT
http://hdl.handle.net/1956/5139

Statistiske modellar for Poissonregresjon med modifiserte null-sannsyn, ZIP og ZAP
http://hdl.handle.net/1956/5136
Statistiske modellar for Poissonregresjon med modifiserte null-sannsyn, ZIP og ZAP
Kårstad, Solveig
Master thesis
The thesis treats the models zero-inflated and zero-altered
Poisson regression, usually abbreviated ZIP and ZAP.
These are two regression models that use the Poisson
as the underlying distribution but allow the probability
of the outcome 0 to exceed the value an ordinary Poisson
distribution assigns to it. The two models were first
presented in their original form in a 1986 article by
John Mullahy, and have since become increasingly common
in applied regression work. Interest in ZIP grew in
particular after Diane Lambert, in a 1992 article,
developed Mullahy's ZIP model into a more general and
practical version. The thesis reviews both articles.
A number of papers on ZIP and ZAP have since appeared,
but most compare the models to the plain Poisson rather
than to each other. When ZIP and ZAP are compared, it
is usually by fitting both models to the same data set
and then comparing AIC values to decide which of the
two models best explains the relationship between the
response variable and the explanatory variables. Little
literature is available on how one can know in advance
which of the models to choose for a given data set.
The work in this thesis therefore tried to determine
whether the choice of model actually matters for the
reliability of the estimated regression coefficients,
what the consequences of choosing the wrong model are,
and, not least, how we can know in advance which model
is the correct one for a given analysis. The thesis
consists mainly of a theoretical part and a part with
practical analysis of ZIP and ZAP. In the first part,
Chapters 1-5, the reader is given a theoretical
introduction to the two models; this background is
needed to follow the analysis of the models in the
second part, Chapters 6-9.
Thu, 05 May 2011 00:00:00 GMT
http://hdl.handle.net/1956/5136

Linear and Nonlinear Convection in Porous Media between Coaxial Cylinders
http://hdl.handle.net/1956/5123
Linear and Nonlinear Convection in Porous Media between Coaxial Cylinders
Bringedal, Carina
Master thesis
In this thesis we develop a mathematical model for describing three-dimensional natural convection in porous media filling a vertical annular cylinder. We apply a linear stability analysis to determine the onset of convection and the preferred convective mode when the annular cylinder is subject to two different types of boundary conditions: heat insulated sidewalls and heat conducting sidewalls. The case of an annular cylinder with insulated sidewalls has been investigated earlier, but our results reveal more details than previously found. We also investigate the case of the radius of the inner cylinder approaching zero and the results are compared with previous work for non-annular cylinders.
Using pseudospectral methods we have built a high-order numerical simulator to uncover the nonlinear regime of the convection cells. We study the onset and geometry of convection modes, and look at the stability of the modes with respect to different types of perturbations. We also examine how variations in the Rayleigh number affect the convection modes and their stability regimes. We uncover an increased complexity regarding which modes are obtained, and we are able to identify stable secondary and mixed modes. We find the different convective modes to have overlapping stability regions depending on the Rayleigh number.
The motivation for studying natural convection in porous media is related to geothermal energy extraction and we attempt to determine the effect of convection cells in a geothermal heat reservoir. However, limitations in the simulator do not allow us to make any conclusions on this matter.
Mon, 02 May 2011 00:00:00 GMT
http://hdl.handle.net/1956/5123

Some Regularity Results for Certain Weakly Quasiregular Mappings on the Heisenberg Group and Elliptic Equations
http://hdl.handle.net/1956/5122
Some Regularity Results for Certain Weakly Quasiregular Mappings on the Heisenberg Group and Elliptic Equations
Li, Qifan
Master thesis
The thesis is organized as follows. In Chapter 1, we set up a higher integrability result for the horizontal part of certain weakly quasiregular maps on the Heisenberg group; unlike the Euclidean case, the exponent of integrability is not near the homogeneous dimension Q. Chapter 2 is devoted to the study of self-improving regularity for certain subelliptic equations. The difficulty of this problem in the Carnot group is that the Whitney extension theorem and the main result can be obtained only for fourth-order homogeneous subelliptic systems from the arguments in (J. L. Lewis, On the very weak solutions of certain elliptic systems, Comm. Part. Diff. Equ., 18 (1993), 1515-1537). Since the p-sub-Laplace equation is a very special case of the nonlinear subelliptic equations, we can establish a better result in this case via the arguments from (R. R. Coifman, P. L. Lions, Y. Meyer and S. Semmes, Compensated compactness and Hardy spaces, J. Math. Pures Appl., 72 (1993), 247-286). Chapter 3 provides a discussion of self-improving regularity for degenerate elliptic equations in Euclidean space. The main result of Chapter 3 extends the result of Lewis cited above to degenerate elliptic systems. The proof relies on the weighted pointwise Sobolev inequality for higher-order derivatives, a useful tool in the study of higher-order degenerate elliptic systems.
Wed, 01 Jun 2011 00:00:00 GMT
http://hdl.handle.net/1956/5122

Some demographic methods applied to urban and rural populations of Pakistan
http://hdl.handle.net/1956/5030
Some demographic methods applied to urban and rural populations of Pakistan
Faisal, Shahzad
Master thesis
Summary of findings: In this thesis I first describe what demography is and the different ways of collecting demographic data, and then apply some demographic techniques to the population of Pakistan. First, I consider infant mortality in Pakistan and apply a test of hypothesis, along with a 2 x 2 table, to show that there is a difference in the facilities and services provided by the government to the urban and rural populations, reporting z and chi-square tests with p-values. The results indicate a large difference in policy between urban and rural areas of Pakistan; the p-value of 0.00001 shows that the hypothesis is highly significant. Although only 35% of the population resides in urban areas, the urban areas receive most of the attention, while the remaining 65% receive less attention from government institutions. Second, using data from the Federal Bureau of Statistics, Pakistan, I set up life tables for the total, urban, rural, male and female populations of Pakistan. The results show that life expectancy at birth in urban areas (68.7 years) is 6% higher than in rural areas (64.3 years). Similarly, the probability of dying in the first age interval is 10% smaller in urban areas than in rural ones (0.06444 and 0.07197, respectively). Moreover, female life expectancy at birth (68.4 years) is 7% higher than male life expectancy (64.3 years). Third, I apply the decomposition technique introduced by Kitagawa (1955) to see how much of the difference between urban and rural death rates is attributable to differences in their age distributions.
The results show that the overall difference between the urban and rural populations is -0.00210 (equation 7.2), while the contributions of age-compositional differences and of age-specific rate differences are -0.00052807 and -0.00157492, respectively (equation 7.3). The proportion of the difference attributable to differences in age composition is 25%, whereas the proportion attributable to differences in rate schedules is 75%, showing that both parts contribute in the same direction. Lastly, I make a population forecast for Pakistan. A few methods are discussed, and forecasts are made using the compound rate of growth method and the cohort component method. The first method gives a population of 294.96 million in 2032 (equation 8.13), whereas the second gives 258.09 million in 2031 (equation 8.14). The estimates from the cohort component method seem the more reliable, as it is the method on which demographers now rely most.
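The Kitagawa decomposition used above splits the difference between two crude rates exactly into a composition effect and a rate effect. A minimal sketch, with hypothetical age-specific rates and population weights:

```python
def kitagawa_decompose(rates1, rates2, weights1, weights2):
    """Kitagawa (1955) decomposition of the difference between two crude rates
    into an age-composition effect and an age-specific-rate effect.
    rates/weights are per-age-group lists; weights sum to 1 per population."""
    crude1 = sum(r * w for r, w in zip(rates1, weights1))
    crude2 = sum(r * w for r, w in zip(rates2, weights2))
    # composition effect: weight differences, averaged rates
    comp = sum((w1 - w2) * 0.5 * (r1 + r2)
               for r1, r2, w1, w2 in zip(rates1, rates2, weights1, weights2))
    # rate effect: rate differences, averaged weights
    rate = sum((r1 - r2) * 0.5 * (w1 + w2)
               for r1, r2, w1, w2 in zip(rates1, rates2, weights1, weights2))
    return crude1 - crude2, comp, rate
```

The identity crude1 - crude2 = comp + rate holds exactly, which is what makes the 25% / 75% split of the total difference in the abstract well defined.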
Wed, 01 Jun 2011 00:00:00 GMT
http://hdl.handle.net/1956/5030

Reconstructing Open Surfaces via Graph-Cuts
http://hdl.handle.net/1956/5024
Reconstructing Open Surfaces via Graph-Cuts
Wan, Min; Wang, Yu; Bae, Egil; Tai, Xue-Cheng; Wang, Desheng
Preprint
A novel graph-cuts-based method is proposed
for reconstructing open surfaces from unordered point
sets. Through a Boolean operation on the crust around
the data set, the open surface problem is translated
to a watertight surface problem within a restricted region.
Integrating the variational model, a Delaunay-based
tetrahedral mesh framework and a multi-phase technique,
the proposed method can reconstruct open surfaces robustly
and effectively. Furthermore, a surface reconstruction
method with domain decomposition is presented,
which is based on the new open surface reconstruction
method. This method can handle more general
surfaces, such as non-orientable surfaces. The algorithm
is designed in a parallel-friendly way and necessary
measures are taken to eliminate cracks at the interface
between the subdomains. Numerical examples
are included to demonstrate the robustness and effectiveness
of the proposed method on watertight, open
orientable, open non-orientable surfaces and combinations
of such.
Sat, 01 Jan 2011 00:00:00 GMT
http://hdl.handle.net/1956/5024

A Continuous Max-Flow Approach to Minimal Partitions with Label Cost Prior
http://hdl.handle.net/1956/5023
A Continuous Max-Flow Approach to Minimal Partitions with Label Cost Prior
Yuan, Jing; Bae, Egil; Boykov, Yuri; Tai, Xue-Cheng
Chapter; Peer reviewed
This paper investigates a convex relaxation approach for
minimum description length (MDL) based image partitioning or labeling,
which proposes an energy functional regularized by the spatial smoothness
prior jointly with a penalty for the total number of appearances or
labels, the so-called label cost prior. As is common in recent studies of convex
relaxation approaches, the total-variation term is applied to encode
the spatial regularity of partition boundaries and the auxiliary label cost
term is penalized by the sum of convex infinity norms of the labeling
functions. We study the proposed convex MDL-based image partition
model under a novel continuous flow maximization perspective,
where we show that the label cost prior amounts to a relaxation of the
flow conservation condition, which is crucial to the classical duality
of max-flow and min-cut. To the best of our knowledge, it is new
to demonstrate such connections between the relaxation of flow conservation
and the penalty of the total number of active appearances. In
addition, we show that the proposed continuous max-flow formulation
also leads to a fast and reliable max-flow based algorithm to address
the challenging convex optimization problem, which significantly outperforms
the previous approach by direct convex programming, in terms
of speed, computation load and the handling of large-scale images. Its numerical
scheme can be easily implemented and accelerated by advanced
computation frameworks, e.g. GPUs.
Sat, 01 Jan 2011 00:00:00 GMT
http://hdl.handle.net/1956/5023

A Fast Continuous Max-Flow Approach to Non-Convex Multilabeling Problems
http://hdl.handle.net/1956/5021
A Fast Continuous Max-Flow Approach to Non-Convex Multilabeling Problems
Bae, Egil; Yuan, Jing; Tai, Xue-Cheng; Boykov, Yuri
Preprint
This work addresses a class of multilabeling problems over a spatially continuous image domain,
where the data fidelity term can be any bounded function, not necessarily convex. Two total-variation-based regularization
terms are considered, the first favoring a linear relationship between the labels and the second independent
of the label values (the Potts model). In the spatially discrete setting, Ishikawa [33] showed that the first of these labeling
problems can be solved exactly by standard max-flow and min-cut algorithms over specially designed graphs.
We will propose a continuous analogue of Ishikawa’s graph construction [33] by formulating continuous max-flow
and min-cut models over a specially designed domain. These max-flow and min-cut models are equivalent under a
primal-dual perspective. They can be seen as exact convex relaxations of the original problem and can be used to
compute global solutions. Fast continuous max-flow based algorithms are proposed based on the max-flow models
whose efficiency and reliability can be validated by both standard optimization theories and experiments. In comparison
to previous work [53, 52] on continuous generalization of Ishikawa’s construction, our approach differs in
the max-flow dual treatment which leads to the following main advantages: A new theoretical framework which
embeds the label order constraints implicitly and naturally results in optimal labeling functions taking values in any
predefined finite label set; A more general thresholding theorem which allows to produce a larger set of non-unique
solutions to the original problem; Numerical experiments show the new max-flow algorithms converge faster than
the fast primal-dual algorithm of [53, 52]. The speedup factor is especially significant at high precisions. In the
end, our dual formulation and algorithms are extended to a recently proposed convex relaxation of the Potts model [50],
thereby avoiding expensive iterative computations of projections without closed-form solution.
Sat, 01 Jan 2011 00:00:00 GMT
http://hdl.handle.net/1956/5021

A Study on Continuous Max-Flow and Min-Cut Approaches
http://hdl.handle.net/1956/5020
A Study on Continuous Max-Flow and Min-Cut Approaches
Yuan, Jing; Bae, Egil; Tai, Xue-Cheng
Chapter; Peer reviewed
We propose and investigate novel max-flow models in the spatially continuous setting, with or without
supervised constraints, under a comparative study of graph based max-flow / min-cut. We show that the continuous
max-flow models correspond to their respective continuous min-cut models as primal and dual problems, and the
continuous min-cut formulation without supervision constraints regards the well-known Chan-Esedoglu-Nikolova
model [15] as a special case. In this respect, basic conceptions and terminologies applied by discrete max-flow / min-cut
are revisited under a new variational perspective. We prove that the associated nonconvex partitioning problems,
unsupervised or supervised, can be solved globally and exactly via the proposed convex continuous max-flow and
min-cut models. Moreover, we derive novel fast max-flow based algorithms whose convergence can be guaranteed
by standard optimization theories. Experiments on image segmentation, both unsupervised and supervised, show
that our continuous max-flow based algorithms outperform previous approaches in terms of efficiency and accuracy.
2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. (An extended journal version).
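For contrast with the continuous models studied in the paper, the discrete max-flow / min-cut problem they generalize can be solved exactly by augmenting-path algorithms such as Edmonds-Karp. A compact sketch on a dense capacity matrix (illustrative, not the paper's method):

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp max-flow on a dense capacity matrix cap[u][v]. By duality,
    the returned value equals the minimum s-t cut, the discrete counterpart
    of the continuous min-cut models."""
    n = len(cap)
    flow = [[0] * n for _ in range(n)]
    total = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and cap[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            break  # no augmenting path left: flow is maximal
        # bottleneck residual capacity along the path
        bottleneck = float("inf")
        v = t
        while v != s:
            u = parent[v]
            bottleneck = min(bottleneck, cap[u][v] - flow[u][v])
            v = u
        # push flow along the path (negative reverse flow = residual back-edge)
        v = t
        while v != s:
            u = parent[v]
            flow[u][v] += bottleneck
            flow[v][u] -= bottleneck
            v = u
        total += bottleneck
    return total
```

In a binary segmentation graph, s and t play the roles of the two labels and the min cut separating them gives the globally optimal labeling, which is exactly the structure the continuous max-flow models lift to the spatially continuous setting.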
Fri, 01 Jan 2010 00:00:00 GMT
http://hdl.handle.net/1956/5020

Global Minimization for Continuous Multiphase Partitioning Problems Using a Dual Approach
http://hdl.handle.net/1956/5019
Global Minimization for Continuous Multiphase Partitioning Problems Using a Dual Approach
Bae, Egil; Yuan, Jing; Tai, Xue-Cheng
Peer reviewed; Journal article
This paper is devoted to the optimization problem
of continuous multi-partitioning, or multi-labeling, which is
based on a convex relaxation of the continuous Potts model.
In contrast to previous efforts, which tackle the optimal
labeling problem in a direct manner, we first propose a
novel dual model and then build up a corresponding duality-based
approach. By analyzing the dual formulation, sufficient
conditions are derived which show that the relaxation
is often exact, i.e. there exist optimal solutions that are also
globally optimal to the original nonconvex Potts model. In
order to deal with the nonsmooth dual problem, we develop
a smoothing method based on the log-sum exponential function
and indicate that such a smoothing approach leads to a
novel smoothed primal-dual model and suggests labelings
with maximum entropy. Such a smoothing method for the
dual model also yields a new thresholding scheme to obtain approximate solutions. An expectation-maximization-like
algorithm is proposed based on the smoothed formulation
which is shown to be superior in efficiency compared to
earlier approaches from continuous optimization. Numerical
experiments also show that our method outperforms several
competitive approaches in various aspects, such as lower energies
and better visual quality.
Fri, 01 Jan 2010 00:00:00 GMT
http://hdl.handle.net/1956/5019

Efficient Global Minimization for the Multiphase Chan-Vese Model of Image Segmentation
http://hdl.handle.net/1956/5018
Efficient Global Minimization for the Multiphase Chan-Vese Model of Image Segmentation
Bae, Egil; Tai, Xue-Cheng
Peer reviewed; Journal article
The Mumford-Shah model is an important variational image segmentation
model. A popular multiphase level set approach, the Chan-Vese
model, was developed as a numerical realization by representing the phases
by several overlapping level set functions. Recently, a variant representation
of the Chan-Vese model with binary level set functions was proposed.
In both approaches, the gradient descent equations had to be solved numerically,
a procedure which is slow and has the potential of getting stuck
in a local minimum.
In this work, we develop an efficient and global minimization method
for a discrete version of the level set representation of the Chan-Vese model
with 4 regions (phases), based on graph cuts. If the average intensity values
of the different regions are sufficiently evenly distributed, the energy
function is submodular. It is shown theoretically and experimentally that
the condition is expected to hold for the most commonly used data terms.
We have also developed a method for minimizing nonsubmodular functions,
that can produce global solutions in practice should the condition
not be satisfied, which may happen for the L1 data term.
In: Proc. Seventh International Conference on Energy Minimization Methods in Computer Vision and Pattern Recognition (EMMCVPR
2009), pp. 28-41, Lecture Notes in Computer Science, Springer, Berlin, 2009. (Extended journal version.)
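The submodularity condition underlying the graph-cut construction can be checked directly: a pairwise term on two binary variables is submodular, and hence exactly minimizable by max-flow/min-cut, iff E(0,0) + E(1,1) <= E(0,1) + E(1,0). A minimal sketch (hypothetical helper, not the paper's code):

```python
def is_submodular(e00, e01, e10, e11, tol=1e-12):
    """Return True iff the binary pairwise term E is submodular:
    E(0,0) + E(1,1) <= E(0,1) + E(1,0)."""
    return e00 + e11 <= e01 + e10 + tol
```

For the Chan-Vese data terms the analogous inequality involves the average region intensities; when it fails, as may happen with the L1 data term, the paper's method for nonsubmodular functions applies instead.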
Thu, 01 Jan 2009 00:00:00 GMThttp://hdl.handle.net/1956/50182009-01-01T00:00:00ZEfficient global minimization methods for variational problems in imaging and vision
http://hdl.handle.net/1956/5017
Efficient global minimization methods for variational problems in imaging and vision
Bae, Egil
Doctoral thesis
Energy minimization has become one of the most important paradigms for formulating
image processing and computer vision problems in a mathematical language.
Energy minimization models have been developed in both the variational
and discrete optimization community during the last 20-30 years. Some models
have established themselves as fundamentally important and arise over a wide
range of applications.
One fundamental challenge is the optimization aspect. The most desirable
models are often the most difficult to handle from an optimization perspective.
Continuous optimization problems may be non-convex and contain many inferior
local minima. Discrete optimization problems may be NP-hard, which means
algorithms are unlikely to exist which can always compute exact solutions without
an unreasonable amount of effort.
This thesis contributes with efficient optimization methods which can compute
global or close to global solutions to important energy minimization models in
imaging and vision. New insights are given in both continuous and combinatorial
optimization, as well as a strengthening of the relationships between these fields.
One problem that is extensively studied is the minimal perimeter partitioning
problem with several regions, which arises naturally in e.g. image segmentation
applications and is NP-hard in the discrete context. New methods are developed
that can often compute global solutions and otherwise very close approximations
to global solutions. Experiments show the new methods perform significantly
better than earlier variational approaches, like the level set method, and earlier
combinatorial optimization approaches. The new algorithms are significantly
faster than previous continuous optimization approaches.
In the discrete community, max-flow and min-cut (graph cuts) have gained
huge popularity because they can efficiently compute global solutions to certain
energy minimization models. It is shown that new types of problems can be solved
exactly by max-flow and min-cut. Furthermore, variational generalizations of
max-flow and min-cut are proposed which bring the global optimization property
to the continuous setting, while avoiding grid bias and metrication errors which
are major disadvantages of the discrete models. Convex optimization algorithms
are derived from the variational max-flow models, which are very efficient and
are more parallel friendly than traditional combinatorial algorithms.
Fri, 19 Aug 2011 00:00:00 GMThttp://hdl.handle.net/1956/50172011-08-19T00:00:00ZSub-Riemannian geometry of spheres and rolling of manifolds
http://hdl.handle.net/1956/4822
Sub-Riemannian geometry of spheres and rolling of manifolds
Molina, Mauricio Godoy
Doctoral thesis
Fri, 29 Apr 2011 00:00:00 GMThttp://hdl.handle.net/1956/48222011-04-29T00:00:00ZPotensialet for dyp geotermisk energi i Norge: Modellering av varmeutvinning fra konstruerte geotermiske system
http://hdl.handle.net/1956/4726
Potensialet for dyp geotermisk energi i Norge: Modellering av varmeutvinning fra konstruerte geotermiske system
Brun, Knut-Erland
Master thesis
This project considers two different models of an engineered geothermal system: a closed single-well system and an open, fracture-dominated system. The single-well system is modelled explicitly, and the model is also adapted to two commercial systems, among them Rock Energy's plans for district heating outside Oslo. The model is studied both on a short time scale (500 days) and a longer time scale (50 years). An attempt was made to model the open system as a homogeneous porous medium, with porosity and permeability determined from the properties of the fractures. This model, however, proved far more difficult to formulate than first expected, with problems related to stabilization of the time-dependent temperature equations. These problems could not be resolved within the time frame of the project.
The results from the single-well model show that for production rates on the order of 3-10 kg/s, under realistic geothermal conditions and with an injection temperature of 40 degrees C, one produces water at a temperature roughly 40-50 degrees below the reservoir temperature. This corresponds to a thermal power of 0.1-0.5 MW. With a higher injection temperature (60 degrees C) it is possible to produce at temperatures up to about 75 degrees C over a longer time horizon, but the thermal power is then lower, 0.1-0.4 MW.
Tue, 05 Oct 2010 00:00:00 GMThttp://hdl.handle.net/1956/47262010-10-05T00:00:00ZTwo Level Additive Schwarz Preconditioner For Control Volume Finite Element Methods
http://hdl.handle.net/1956/4725
Two Level Additive Schwarz Preconditioner For Control Volume Finite Element Methods
Gundersen, Kristian
Master thesis
In this thesis we investigate numerically the convergence properties of the control volume finite element method (CVFEM) preconditioned with a two-level overlapping additive Schwarz method. Relevant theory regarding the CVFEM, the Schwarz framework and the iterative solver, the Generalized Minimal Residual method (GMRES), is explained.
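For reference, the standard form of a two-level overlapping additive Schwarz preconditioner (a sketch of the usual construction; the thesis may use a particular variant) is

\[
M_{AS}^{-1} \;=\; R_0^{\top} A_0^{-1} R_0 \;+\; \sum_{i=1}^{N} R_i^{\top} A_i^{-1} R_i,
\]

where \(R_i\) restricts to the \(i\)-th overlapping subdomain, \(A_i = R_i A R_i^{\top}\) are the local problems, and \(R_0, A_0\) are the coarse-grid restriction and coarse operator that supply the global coupling needed for scalability.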
Wed, 22 Sep 2010 00:00:00 GMThttp://hdl.handle.net/1956/47252010-09-22T00:00:00ZHierarchical Bayesian Survival Analysis of Age-Specific Data From Birds' Nests
http://hdl.handle.net/1956/4724
Hierarchical Bayesian Survival Analysis of Age-Specific Data From Birds' Nests
Willgohs, Niejing
Master thesis
In this thesis, I first present the grassland bird data from Wells (2007), which have been analysed with several different methods for estimating nest survival rates. The hierarchical Bayesian method from Cao (2009) is then introduced as a model for estimating nest-specific survival rates from doubly censored, left-truncated data. I compare the two methods, and in the course of the comparison the Cox proportional hazards model and the intrinsic autoregressive prior are studied.
In the second half of the thesis, different data-analysis methods are introduced, the deviance information criterion is presented, and the Bayesian method is compared with the Mayfield method.
The hierarchical Bayesian method is relatively new, and the model is admittedly complicated for readers unfamiliar with Bayesian statistics and high-dimensional integration. Nevertheless, it is a valuable statistical tool. The deviance information criterion offers a new way of analysing the data; users can choose different priors to obtain different estimates, which makes the approach widely applicable.
Fri, 19 Nov 2010 00:00:00 GMThttp://hdl.handle.net/1956/47242010-11-19T00:00:00ZHistological type and grade of breast cancer tumors by parity, age at birth, and time since birth: a register-based study in Norway
http://hdl.handle.net/1956/4684
Histological type and grade of breast cancer tumors by parity, age at birth, and time since birth: a register-based study in Norway
Albrektsen, Grethe; Heuch, Ivar; Thoresen, Steinar Ø.
Peer reviewed; Journal article
Background
Some studies have indicated that reproductive factors affect the risk of histological types of breast cancer differently. The long-term protective effect of a childbirth is preceded by a short-term adverse effect. Few studies have examined whether tumors diagnosed shortly after birth have specific histological characteristics.
Methods
In the present register-based study, comprising information for 22,867 Norwegian breast cancer cases (20-74 years), we examined whether histological type (9 categories) and grade of tumor (2 combined categories) differed by parity or age at first birth. Associations with time since birth were evaluated among 9709 women diagnosed before age 50 years. Chi-square tests were applied for comparing proportions, whereas odds ratios (each histological type vs. ductal, or grade 3-4 vs. grade 1-2) were estimated in polytomous and binary logistic regression analyses.
Results
Ductal tumors, the most common histological type, accounted for 81.4% of all cases, followed by lobular tumors (6.3%) and unspecified carcinomas (5.5%). Other subtypes accounted for 0.4%-1.5% of the cases each. For all histological types, the proportions differed significantly by age at diagnosis. The proportion of mucinous and tubular tumors decreased with increasing parity, whereas Paget disease and medullary tumors were most common in women of high parity. An increasing trend with increasing age at first birth was most pronounced for lobular tumors and unspecified carcinomas; an association in the opposite direction was seen in relation to medullary and tubular tumors. In age-adjusted analyses, only the proportions of unspecified carcinomas and lobular tumors decreased significantly with increasing time since first and last birth. However, ductal tumors, and malignant sarcomas, mainly phyllodes tumors, seemed to occur at a higher frequency in women diagnosed <2 years after first childbirth. The proportions of medullary tumors and Paget disease were particularly high among women diagnosed 2-5 years after last birth. The high proportion of poorly differentiated tumors in women with a recent childbirth was partly explained by young age.
Conclusion
Our results support previous observations that reproductive factors affect the risk of histological types of breast cancer differently. Sarcomas, medullary tumors, and possibly also Paget disease, may be particularly susceptible to pregnancy-related exposure.
Fri, 21 May 2010 00:00:00 GMThttp://hdl.handle.net/1956/46842010-05-21T00:00:00ZArachnoid cysts do not contain cerebrospinal fluid: A comparative chemical analysis of arachnoid cyst fluid and cerebrospinal fluid in adults
http://hdl.handle.net/1956/4674
Arachnoid cysts do not contain cerebrospinal fluid: A comparative chemical analysis of arachnoid cyst fluid and cerebrospinal fluid in adults
Berle, Magnus; Wester, Knut; Ulvik, Rune Johan; Kroksveen, Ann Cathrine; Haaland, Øystein; Amiry-Moghaddam, Mahmood; Berven, Frode S.
Journal article; Peer reviewed
Background
Arachnoid cyst (AC) fluid has not previously been compared with cerebrospinal fluid (CSF) from the same patient. ACs are commonly referred to as containing "CSF-like fluid". The objective of this study was to characterize AC fluid by clinical chemistry and to compare AC fluid to CSF drawn from the same patient. Such comparative analysis can shed further light on the mechanisms for filling and sustaining of ACs.
Methods
Cyst fluid from 15 adult patients with unilateral temporal AC (9 female, 6 male, age 22-77y) was compared with CSF from the same patients by clinical chemical analysis.
Results
AC fluid and CSF had the same osmolarity. There were no significant differences in the concentrations of sodium, potassium, chloride, calcium, magnesium or glucose. We found a significantly elevated concentration of phosphate in AC fluid (0.39 versus 0.35 mmol/L in CSF; p = 0.02), and significantly reduced concentrations of total protein (0.30 versus 0.41 g/L; p = 0.004), of ferritin (7.8 versus 25.5 ug/L; p = 0.001) and of lactate dehydrogenase (17.9 versus 35.6 U/L; p = 0.002) in AC fluid relative to CSF.
Conclusions
AC fluid is not identical to CSF. The differential composition of AC fluid relative to CSF supports secretion or active transport as the mechanism underlying cyst filling. Oncotic pressure gradients or slit-valves as mechanisms for generating fluid in temporal ACs are not supported by these results.
Thu, 10 Jun 2010 00:00:00 GMThttp://hdl.handle.net/1956/46742010-06-10T00:00:00ZSub-semi-Riemannian geometry on Heisenberg-type groups
http://hdl.handle.net/1956/4604
Sub-semi-Riemannian geometry on Heisenberg-type groups
Korolko, Anna
Doctoral thesis
Fri, 04 Feb 2011 00:00:00 GMThttp://hdl.handle.net/1956/46042011-02-04T00:00:00ZLattice-Boltzmann modelling of spatial variation in surface tension and wetting effects
http://hdl.handle.net/1956/4582
Lattice-Boltzmann modelling of spatial variation in surface tension and wetting effects
Stensholt, Sigvat Kuekiatngam
Doctoral thesis
The main focus of the project on which this thesis is founded is to study
the effect that spatial variations in capillary properties have on fluid flow. This requires
knowledge of the underlying fluid flow equations, and some sort of numerical
method to handle numerical simulations.
The first part of this thesis deals with the underlying theory of the methods.
Classical fluid theory is given a brief introduction, followed by lattice Boltzmann
theory and methods for two-phase flow, and some basic simulations that demonstrate
how the two-phase flow methods work. Papers which have been submitted or
presented make up the second part of the thesis.
Chapter 1 in the thesis covers classical fluid flow. A particular emphasis is
placed on the capillary effects of surface tension, and wetting. The chapter also
covers theory on surfactants and their applications.
Chapter 2 covers the foundations of the lattice Boltzmann method which I used
throughout the project. These methods are a fairly new way of simulating fluid flow,
even though they are based on the century-old Boltzmann equation. The underlying
algorithms are simple, but the simulated flow can be quite complex.
Chapter 3 covers how the lattice Boltzmann method deals with more complex
flow. This includes multiphase flow and flow with solute components. Variations
in surface tension are also covered.
Chapter 4 illustrates some of the properties found in the multiphase lattice
Boltzmann methods used in this thesis.
Chapter 5 is a summary of the work conducted. A short description of the
papers included in the second part of this thesis is provided. The chapter also
considers further extensions and possible improvements to the work in this thesis.
Thu, 26 Aug 2010 00:00:00 GMThttp://hdl.handle.net/1956/45822010-08-26T00:00:00ZRobust Control Volume Methods for Reservoir Simulation on Challenging Grids
http://hdl.handle.net/1956/4577
Robust Control Volume Methods for Reservoir Simulation on Challenging Grids
Keilegavlen, Eirik
Doctoral thesis
Fri, 19 Feb 2010 00:00:00 GMThttp://hdl.handle.net/1956/45772010-02-19T00:00:00ZOn Semi-implicit Splitting Schemes for the Beltrami Color Image Filtering
http://hdl.handle.net/1956/4541
On Semi-implicit Splitting Schemes for the Beltrami Color Image Filtering
Rosman, Guy; Dascal, Lorina; Tai, Xue-Cheng; Kimmel, Ron
Peer reviewed; Journal article
The Beltrami flow is an efficient nonlinear filter that has been shown to be effective for color image processing. The corresponding anisotropic diffusion operator strongly couples the spectral components. Usually, this flow is implemented by explicit schemes, which are stable only for very small time steps and therefore require many iterations. In this paper we introduce a semi-implicit Crank-Nicolson scheme based on locally one-dimensional (LOD)/additive operator splitting (AOS) for implementing the anisotropic Beltrami operator. The mixed spatial derivatives are treated explicitly, while the non-mixed derivatives are approximated in an implicit manner. In the case of constant coefficients, the LOD splitting scheme is proven to be unconditionally stable. Numerical experiments indicate that the proposed scheme is also stable in more general settings. Stability, accuracy, and efficiency of the splitting schemes are tested in applications such as the Beltrami-based scale-space, Beltrami denoising and Beltrami deblurring. In order to further accelerate the convergence of the numerical scheme, the reduced rank extrapolation (RRE) vector extrapolation technique is employed.
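For reference, the standard additive operator splitting (AOS) step for a diffusion operator split into one-dimensional parts \(A_l\) is, in Weickert's classical form (the paper combines such a splitting with a Crank-Nicolson treatment, so its exact scheme differs):

\[
u^{n+1} \;=\; \frac{1}{m} \sum_{l=1}^{m} \bigl( I - m\,\tau\, A_l(u^{n}) \bigr)^{-1} u^{n},
\]

which requires only tridiagonal solves in each coordinate direction per time step.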
Sat, 01 Jan 2011 00:00:00 GMThttp://hdl.handle.net/1956/45412011-01-01T00:00:00ZBridging the ensemble Kalman filter and particle filters: the adaptive Gaussian mixture filter
http://hdl.handle.net/1956/4540
Bridging the ensemble Kalman filter and particle filters: the adaptive Gaussian mixture filter
Stordal, Andreas Størksen; Karlsen, Hans A.; Nævdal, Geir; Skaug, Hans J.; Vallès, Brice
Peer reviewed; Journal article
The nonlinear filtering problem occurs in many scientific areas. Sequential Monte Carlo solutions with the correct asymptotic behavior such as particle filters exist, but they are computationally too expensive when working with high-dimensional systems. The ensemble Kalman filter (EnKF) is a more robust method that has shown promising results with a small sample size, but the samples are not guaranteed to come from the true posterior distribution. By approximating the model error with a Gaussian distribution, one may represent the posterior distribution as a sum of Gaussian kernels. The resulting Gaussian mixture filter has the advantage of both a local Kalman type correction and the weighting/resampling step of a particle filter. The Gaussian mixture approximation relies on a bandwidth parameter which often has to be kept quite large in order to avoid a weight collapse in high dimensions. As a result, the Kalman correction is too large to capture highly non-Gaussian posterior distributions. In this paper, we have extended the Gaussian mixture filter (Hoteit et al., Mon Weather Rev 136:317–334, 2008) and also made the connection to particle filters more transparent. In particular, we introduce a tuning parameter for the importance weights. In the last part of the paper, we have performed a simulation experiment with the Lorenz40 model where our method has been compared to the EnKF and a full implementation of a particle filter. The results clearly indicate that the new method has advantages compared to the standard EnKF.
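The weight-tuning idea can be sketched as an interpolation between particle-filter importance weights and uniform weights (an illustrative sketch; `alpha` is a hypothetical name for the tuning parameter, and the paper's exact rule may differ):

```python
import numpy as np

def damped_weights(log_lik, alpha):
    """Blend normalized importance weights with uniform weights.
    alpha = 1 recovers standard particle-filter weighting; alpha = 0
    gives uniform weights, removing the resampling effect (EnKF-like)."""
    w = np.exp(log_lik - np.max(log_lik))  # stabilized likelihoods
    w /= w.sum()                           # normalized importance weights
    n = w.size
    return alpha * w + (1.0 - alpha) / n
```

Intermediate values of `alpha` trade the asymptotic correctness of the particle filter against the robustness of the ensemble update in high dimensions.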
Fri, 01 Jan 2010 00:00:00 GMThttp://hdl.handle.net/1956/45402010-01-01T00:00:00ZModelling the non-Hydrostatic Pressure in σ-coordinate Ocean Models
http://hdl.handle.net/1956/4528
Modelling the non-Hydrostatic Pressure in σ-coordinate Ocean Models
Keilegavlen, Eirik
Master thesis
Fri, 22 Sep 2006 00:00:00 GMThttp://hdl.handle.net/1956/45282006-09-22T00:00:00ZNumerical Modeling of Supercritical ‘Out-Salting’ in the “Atlantis II Deep” (Red Sea) Hydrothermal System
http://hdl.handle.net/1956/4496
Numerical Modeling of Supercritical ‘Out-Salting’ in the “Atlantis II Deep” (Red Sea) Hydrothermal System
Hovland, Martin; Rueslåtten, Håkon; Kuznetsova, Tatiana; Kvamme, Bjørn; Fladmark, Gunnar E.; Johnsen, Hans Konrad
Peer reviewed; Journal article
Supercritical water behaves much like a non-polar fluid, and its ability to dissolve salt ions is very low. In
rifting locations world-wide, where hot vents occur, it has been shown that seawater attains supercritical conditions. We
therefore anticipate that circulation of seawater in hydrothermal systems passing through regions of the supercritical domain
results in spontaneous precipitation of salt particles. Thus, the hot ‘geysers’ of saturated brines observed in the ‘Atlantis
II Deep’ of the Red Sea could result from re-dissolution of salts accumulated in underground fracture systems. Here
we report on an advanced numerical modeling study which demonstrates, for the first time, that there is a forced convection
cell where salts precipitate and accumulate. These combined numerical thermodynamic simulations and basin modelling
results also demonstrate that hot brines reflux back to surface immediately above the magma chamber located beneath
the axis of the rift. Based on this study, we predict that hydrothermal ‘out-salting’ is the main cause of dense, warm brines
accumulating in the central portion of the Red Sea. Furthermore, the results are relevant for understanding how large volumes
of evaporites (salts) accumulate along rifted plate margins.
Mon, 01 Jan 2007 00:00:00 GMThttp://hdl.handle.net/1956/44962007-01-01T00:00:00ZSome new constructions of asymptotically good code sequences
http://hdl.handle.net/1956/4467
Some new constructions of asymptotically good code sequences
Rasmussen, Nils Henry
Master thesis
Sun, 01 Jan 2006 00:00:00 GMThttp://hdl.handle.net/1956/44672006-01-01T00:00:00ZFeasibility of simplified integral equation modeling of low-frequency marine CSEM with a resistive target
http://hdl.handle.net/1956/4464
Feasibility of simplified integral equation modeling of low-frequency marine CSEM with a resistive target
Bakr, Shaaban Ali; Mannseth, Trond
Peer reviewed; Journal article
We have assessed the accuracy of a simplified integral
equation (SIE) modeling approach for marine controlled-source
electromagnetics (CSEM) with low applied frequencies
and a resistive target. The most computationally intensive
part of rigorous integral equation (IE) modeling is the
computation of the anomalous electric field within the target
itself. This leads to a matrix problem with a dense coefficient
matrix. It is well known that, in general, the presence of many
grid cells creates a computational disadvantage for dense-matrix
methods compared to sparse-matrix methods. The
SIE approach replaces the dense-matrix part of rigorous IE
modeling by sparse-matrix calculations based on an approximation
of Maxwell’s equations. The approximation is justified
theoretically if a certain dimensionless parameter β is
small. As opposed to approximations of the Born type, the validity
of the SIE approach does not rely on the anomalous
field to be small in comparison with the background field in
the target region. We have calculated β for a range of parameter
values typical for marine CSEM, and compared the SIE
approach numerically to the rigorous IE method and to the
quasi-linear (QL) and quasi-analytic (QA) approximate solutions.
It is found that the SIE approach is very accurate for
small β, corresponding to frequencies in the lower range of
those typical for marine CSEM for petroleum exploration. In
addition, the SIE approach is found to be significantly more
accurate than the QL and QA approximations for small β.
Thu, 01 Jan 2009 00:00:00 GMThttp://hdl.handle.net/1956/44642009-01-01T00:00:00ZAn approximate hybrid method for modeling of electromagnetic scattering from an underground target
http://hdl.handle.net/1956/4460
An approximate hybrid method for modeling of electromagnetic scattering from an underground target
Bakr, Shaaban Ali
Doctoral thesis
The detection of a potential petroleum reservoir is achieved through inversion of electromagnetic data acquired in sea-floor receivers. Inversion of electromagnetic data requires a number of repeated solves of the mathematical/numerical model in an iteration process. The computational efficiency of the solver will therefore have a great impact on the computational efficiency of the inversion. Various types of solvers, like finite difference, finite element, integral equation (IE), and hybrid methods, have been applied. The different types of methods have different computational advantages and disadvantages. With rigorous IE, a dense-matrix problem must be solved. For problems involving many grid cells in the target region, a huge computational effort is then needed.
The main focus of this thesis is to develop and analyze a fast and accurate hybrid method, simplified IE (SIE) modeling, for modeling of electromagnetic scattering from an underground target. The method consists of solving a finite volume problem in a localized region containing the target, and using the IE method to obtain the field outside that region. The hybrid method thus replaces the dense-matrix part of the rigorous IE method by sparse matrix calculations based on an approximation of Maxwell’s Equations.
The thesis shows that both theoretical and practical computational performance of SIE is orders of magnitude better than that of rigorous IE on large-scale problems. Also, SIE produced excellent accuracy for typical CSEM frequencies. The thesis presents a novel order-of-magnitude analysis for Maxwell’s equations and a proper assessment of the range of validity of a novel approximate method for CSEM.
Thu, 09 Dec 2010 00:00:00 GMThttp://hdl.handle.net/1956/44602010-12-09T00:00:00ZMatematikk i tenårene - Matematikksamtalen som verktøy
http://hdl.handle.net/1956/4261
Matematikk i tenårene - Matematikksamtalen som verktøy
Rosef, Ines Haugland
Master thesis
Mathematical conversations, as the term is used in this text, emphasize communication and collaboration between pupils as a path to mathematical knowledge, and thus build on a sociocultural view of learning. In the conversations, pupils are taken out of the classroom in small groups to discuss problem-solving tasks. This text looks in particular at how mathematical conversations suit pupils in their teens. During this period pupils go through a cognitive development from child to adult, in which the ability to think abstractly also develops. This development is a process that does not occur at a given age, which means that in any one classroom one will find pupils at different stages. Through a presentation of general theory together with nine mathematical conversations conducted in two different countries, Norway and South Africa, the text attempts to show how adapting to pupils' cognitive development through the mathematical conversation promotes learning and understanding of mathematics in the teenage years, and promotes the school and the mathematics classroom as an arena for education in democracy. In addition, through mathematical conversations pupils encounter a mathematics that becomes more abstract at their own pace. The pupils also receive training in conversation technique, which gives them a good foundation for learning mathematics, both in school now and when facing problems later in life. The text also presents how the teacher, as "leader" of the conversation, must relate differently to weak and strong pupils in order to promote growth and learning in the conversations, and how the teacher's orientation plays a role in making mathematical conversations successful.
Tue, 25 May 2010 00:00:00 GMThttp://hdl.handle.net/1956/42612010-05-25T00:00:00ZNon-commutative Harmonic Analysis: Generalization of Phase Correlation to the Euclidean Motion Group
http://hdl.handle.net/1956/4226
Non-commutative Harmonic Analysis: Generalization of Phase Correlation to the Euclidean Motion Group
Loneland, Atle
Master thesis
This thesis is about noncommutative harmonic analysis, the generalization of phase correlation to the Euclidean motion group, and its application in image processing.
Thu, 10 Jun 2010 00:00:00 GMThttp://hdl.handle.net/1956/42262010-06-10T00:00:00ZNon-Slit Solutions to the Loewner Equation
http://hdl.handle.net/1956/4225
Non-Slit Solutions to the Loewner Equation
Ivanov, Georgy
Master thesis
We study conditions on the driving term of the Loewner equation which guarantee that the equation generates slits. We construct a new family of examples of non-slit solutions which generalize Kufarev's example, in which the local Lipschitz characteristic of the driving term takes values from approximately 4.01 to plus infinity. This should be compared to results by Marshall, Rohde and Lind, which state that driving terms with local Lipschitz characteristic less than 4 always generate slit solutions.
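For context, the (chordal) Loewner equation with driving term \(\lambda(t)\) referred to above is

\[
\frac{\partial g_t(z)}{\partial t} \;=\; \frac{2}{g_t(z) - \lambda(t)}, \qquad g_0(z) = z,
\]

and the Marshall-Rohde/Lind results say that when the Hölder-1/2 (local Lipschitz) norm of \(\lambda\) is below the constant 4, the generated hulls are slits; examples of Kufarev type show that the constant cannot be improved.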
Fri, 30 Apr 2010 00:00:00 GMThttp://hdl.handle.net/1956/42252010-04-30T00:00:00ZChain Ladder Metoden og Mack's Modell sammenlignet med andre Poissonbaserte modeller
http://hdl.handle.net/1956/4224
Chain Ladder Metoden og Mack's Modell sammenlignet med andre Poissonbaserte modeller
Sagosen, Monica
Master thesis
This thesis considers reserving for future claims. The primary focus is the Chain Ladder method and Mack's model. We then attempt to simulate data from a model consistent with Mack's three assumptions, the compound Poisson model. We then fit an over-dispersed Poisson regression to the data, before finally attempting to conclude which method is preferable, based on the findings we have made.
Sun, 30 May 2010 00:00:00 GMThttp://hdl.handle.net/1956/42242010-05-30T00:00:00ZA discussion on turbulent and undular bores using the models from the shallow water equations and the dispersive system.
http://hdl.handle.net/1956/4211
A discussion on turbulent and undular bores using the models from the shallow water equations and the dispersive system.
Eikeland, Erik
Master thesis
The bore is a wave phenomenon that occurs in channels and rivers of shallow water. Referred to as a discharge wave, a bore is generated by a sudden increase of water flow. Bores appear in certain rivers as the tide pushes water into the river mouth. Simply described, a bore is a transition between two uniform flows of different flow depth. The point of transition is referred to as the bore front. The transition between the two states of flow is often marked by violent turbulence. But there are also bores in which no turbulence is observed. Such bores are called weak bores, as in these cases the difference in height between the two flow depths is small. The weak bore displays a unique character not present in the turbulent bore. It carries a train of undulating waves just behind the front. For this reason it is often called the undular bore.
This thesis aims to give an introductory summary of the theory of the bore phenomenon and to lay out a map for further study. In this way it ought to serve as a good introduction for those not familiar with the subject and a good refresher for those who are.
In the following investigation two models of the bore will be presented in which water is treated as an ideal fluid. The first model is reached by assuming hydrostatic pressure. This gives the shallow water equations. These equations model the bore as a travelling discontinuity separating two uniform flow depths. The water flow is steady, conserving mass and momentum, but the water loses energy as it passes through the bore front. This energy loss was first pointed out by Rayleigh, and is commonly referred to as the classical energy loss. The energy loss is a trait that coincides well with turbulent bores, and the shallow water equations model these bores quite well.
The shallow water equations do not model the undular bores, as they cannot sustain the undulating waves behind the bore front. A second model is based on a dispersive system. This is an extension of the shallow water equations where, effectively, the treatment of pressure is refined. It leads to various Boussinesq systems and, in the specialized case of all fluid moving in one direction, to the well-known KdV equation. Modelling a bore-like initial value by a dispersive system brings out the undulations behind the bore front; however, it does not capture the full nature of the undular bore itself.
In this thesis we support the conclusion that adding frictional effects of the boundary will improve the dispersive model of the undular bore. But the arguments leading to this conclusion will be debated.
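The classical energy loss mentioned above can be made explicit: for a stationary hydraulic jump between upstream depth \(h_1\) and downstream depth \(h_2\), conservation of mass and momentum across the front leaves a head loss of

\[
\Delta H \;=\; \frac{(h_2 - h_1)^3}{4\, h_1 h_2},
\]

so weak bores (\(h_2 \approx h_1\)) dissipate almost no energy, consistent with their undular, non-turbulent character.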
Wed, 26 May 2010 00:00:00 GMThttp://hdl.handle.net/1956/42112010-05-26T00:00:00ZMål for asymmetri og kurtose
http://hdl.handle.net/1956/4210
Mål for asymmetri og kurtose
Vestrheim, Arne
Master thesis
Critchley and Jones (2008) introduce new measures of asymmetry and kurtosis. This thesis discusses these measures both as theoretical quantities and with regard to estimation.
Tue, 13 Apr 2010 00:00:00 GMThttp://hdl.handle.net/1956/42102010-04-13T00:00:00ZModeling Dissolution within Vertically Averaged Formulations
http://hdl.handle.net/1956/4209
Modeling Dissolution within Vertically Averaged Formulations
Mykkeltvedt, Trine Solberg
Master thesis
In this thesis we develop a mathematical model describing the migration and trapping of CO2 when it is injected into a deep saline aquifer. Both the migration and trapping processes are inherently complex, spanning multiple spatial and temporal scales. We develop an upscaled mass transfer model for these processes within vertically averaged formulations. This model is applied to a benchmark problem designed to highlight important questions about the long-term fate of the injected CO2.
The developed model includes the effect of dissolution trapping. When considering dissolution trapping we distinguish between dissolution due to equilibration between mobile CO2 and brine as CO2 drains a region of pure brine, and dissolution due to density-driven convective mixing. Using the developed model we have studied the plume migration with and without the effect of convective mixing, and examined the influence of the dissolution rate on the tip velocity. Our results show that the value of the dissolution rate has a great impact on the tip velocity. We find that the tip velocity has two characteristic values depending on the dissolution rate.
Thu, 10 Jun 2010 00:00:00 GMThttp://hdl.handle.net/1956/42092010-06-10T00:00:00ZOn the use of vertically averaged models to simulate CO2 migration in a layered saline aquifer
http://hdl.handle.net/1956/4208
On the use of vertically averaged models to simulate CO2 migration in a layered saline aquifer
El-Faour, Basil Fayez
Master thesis
Geologic and flow characteristics such as permeability and porosity, capillary pressure, geologic structure, and thickness all influence the CO2 plume distribution to varying degrees. These parameters do not necessarily act independently; depending on their variations, any one of them may dominate the shape and size of the plume.
In this master thesis, we consider the long-term fate and migration of a large CO2 plume in a heterogeneous (two-layer) sloping saline aquifer.
We use a vertical equilibrium (VE) mathematical model to study the effect of two permeability layers on the shape, speed and migrated distance of the CO2 plume. The layer permeability ratio is k2/k1, where k2 and k1 are the permeabilities in the upper and lower layers of the aquifer, respectively. We also study the effect of the thickness of the lower-permeability layer (h = H/2, h = H/4, h = H/8), where H is the thickness of the aquifer. We attain these goals by comparing the simulation results of the Eclipse and VE simulators, both of which simulate the movement of the CO2 plume in homogeneous and layered aquifers.
A VE model has been built assuming one-dimensional flow in the x-direction, motivated by the large difference in length scale between the vertical and horizontal directions. We model a 2D vertical section in the Eclipse simulator; vertically averaging this section yields 1D results that can be compared with the VE solution.
Our results show that variations in the vertical permeability layering may have dramatic effects on the CO2 plume shape. A relatively lower-permeability layer reduces the velocity of CO2 through it, and an increase in CO2 saturation occurs below this layer. At early times, the build-up in saturation increases, and the lateral growth of the CO2 immediately below this layer increases. At later times, the saturation decreases and the vertical flow of CO2 through this layer increases. The k2/k1 ratio and the thickness of the lower-permeability layer determine the plume shape and distance migrated. In some of our simulations, the results show two connected/disconnected plumes.
Tue, 01 Jun 2010 00:00:00 GMThttp://hdl.handle.net/1956/42082010-06-01T00:00:00ZMarkovmodulerte Poisson-prosessar som modellar for dykketidsdata
http://hdl.handle.net/1956/4207
Markovmodulerte Poisson-prosessar som modellar for dykketidsdata
Mossestad, Mona Christin
Master thesis
All mammals living in the sea must come to the surface to breathe, and it is when the animals break the surface that we are able to observe them. The animals are not always easy to detect, and the probability of detecting an animal depends on how and how often it surfaces. This is taken into account when population sizes are estimated, and it is then useful to have a model for the diving pattern. If the time spent at the surface is short compared with the duration of the dive, it is reasonable to describe the phenomenon mathematically as a stochastic point process in time: each "blow" is a point, and the time between blows varies stochastically. In estimating minke whale populations, homogeneous Poisson processes have been used extensively to model the dive times. This thesis gives an assessment of the Poisson model and proposes an improvement using Markov-modulated Poisson processes.
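A two-state Markov-modulated Poisson process of the kind proposed here can be sketched in a few lines. The following illustrative simulation (parameter values are made up, not estimated from whale data) lets a continuous-time chain switch between a high-rate and a low-rate state, with surfacings occurring as a Poisson process at the current state's rate.

```python
import random

def simulate_mmpp(lam, switch, t_end, seed=1):
    """Simulate a two-state Markov-modulated Poisson process.
    lam[i]    : event rate (e.g. surfacings per unit time) in state i
    switch[i] : rate of leaving state i for the other state
    Returns the sorted list of event times in [0, t_end]."""
    rng = random.Random(seed)
    t, state, events = 0.0, 0, []
    while t < t_end:
        # Exponential dwell time in the current state before switching.
        dwell = rng.expovariate(switch[state])
        # Events occur as a Poisson process at rate lam[state].
        s = t + rng.expovariate(lam[state])
        while s < min(t + dwell, t_end):
            events.append(s)
            s += rng.expovariate(lam[state])
        t += dwell
        state = 1 - state
    return events
```

With a large gap between the two rates, the output shows the clustered surfacing pattern that a homogeneous Poisson process cannot reproduce.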
Tue, 23 Mar 2010 00:00:00 GMThttp://hdl.handle.net/1956/42072010-03-23T00:00:00ZA Lanczos-view on Independent Component Analysis of fMRI Data
http://hdl.handle.net/1956/4204
A Lanczos-view on Independent Component Analysis of fMRI Data
Hanson, Erik Andreas
Master thesis
Analysis of resting-state fMRI data is commonly done by a combination of two signal processing methods: Principal Component Analysis (PCA) and Independent Component Analysis (ICA). In this thesis, a possible error caused by the combination of the two methods is pointed out. The error is described theoretically and through several examples. Furthermore, a new, alternative algorithm is introduced. The new algorithm performs the ICA by a Lanczos method on a four-dimensional tensor without a PCA preprocessing step, and may thereby avoid some of the possible errors. This Lanczos-based method is suited to large datasets where only a limited number of components are of interest. The convergence of the method, and thereby the ordering of the independent components, depends heavily on the spectral properties of the data. Without prior knowledge of the eigenvalues, the Lanczos-based method may give unsatisfactory results. Nevertheless, the framework on which the Lanczos-ICA method is built proves to be a powerful basis for future ICA methods and fMRI analysis algorithms.
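The Lanczos iteration that the method rests on can be sketched generically for a symmetric operator accessed only through matrix-vector products. This is a textbook version, not the thesis' tensor-based implementation: it builds the tridiagonal matrix whose eigenvalues approximate the operator's extreme eigenvalues.

```python
def lanczos(matvec, v0, k):
    """Run k steps of the Lanczos iteration for a symmetric operator
    given as a matrix-vector product `matvec`. Returns the diagonal
    (alphas) and off-diagonal (betas) of the tridiagonal matrix."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    def axpy(a, x, y):
        return [a * xi + yi for xi, yi in zip(x, y)]
    norm = dot(v0, v0) ** 0.5
    v = [x / norm for x in v0]          # first Lanczos vector
    v_prev = [0.0] * len(v0)
    alphas, betas = [], []
    beta = 0.0
    for _ in range(k):
        w = matvec(v)
        alpha = dot(w, v)
        # Three-term recurrence: orthogonalize against v and v_prev.
        w = axpy(-alpha, v, axpy(-beta, v_prev, w))
        beta = dot(w, w) ** 0.5
        alphas.append(alpha)
        betas.append(beta)
        if beta == 0.0:
            break                        # invariant subspace found
        v_prev, v = v, [x / beta for x in w]
    return alphas, betas[:-1]
```

For the diagonal operator diag(2, 1), two steps yield the tridiagonal matrix [[1.5, 0.5], [0.5, 1.5]], whose eigenvalues 1 and 2 recover the operator's spectrum exactly.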
Fri, 04 Jun 2010 00:00:00 GMThttp://hdl.handle.net/1956/42042010-06-04T00:00:00ZA Dense Gas Model of Combined Transports and Distributions of Solutes in the Interstitium including Steric and Electrostatic Exclusion Effects, and Comparison with Experimental Data
http://hdl.handle.net/1956/4201
A Dense Gas Model of Combined Transports and Distributions of Solutes in the Interstitium including Steric and Electrostatic Exclusion Effects, and Comparison with Experimental Data
Justad, Sigrid Ravnsborg
Master thesis
Mathematical modeling of physiological fluid systems is regarded as an important tool for further understanding of circulatory properties. The interstitium is a fluid-filled space outside the cells in the body, and all transport of substances into and out of cells must pass through the interstitium. It is therefore of great interest to obtain a better understanding of this fluid flow. Large substances, e.g.\ proteins and some therapeutic agents, are hindered due to their molecular size. They are excluded from a certain fraction of the fluid volume, and thus have a lower distribution volume in the interstitium. In addition, the molecular charge may have a strong influence on the exclusion phenomenon. This has been the motivation for the modeling in this thesis. First, a set of equations appropriate for modeling fluid flow through the interstitium is derived. Furthermore, a thorough study of the electrostatic properties of the interstitium has been performed.
Thu, 27 May 2010 00:00:00 GMThttp://hdl.handle.net/1956/42012010-05-27T00:00:00ZA Multiscale Approach to Estimate Large Scale Flow and Leakage from Geological Storage
http://hdl.handle.net/1956/4198
A Multiscale Approach to Estimate Large Scale Flow and Leakage from Geological Storage
Elje, Elin Marie
Master thesis
Deep saline aquifers offer the greatest storage capacity for geological storage.
However, the formations might be extensive, and because of the oil and gas legacy the aquifers are frequently perforated by abandoned wells. These wells become potential leakage pathways for the injected CO2. There might be as many as hundreds of thousands of abandoned wells in a saline aquifer, which makes obtaining accurate and robust estimates for the flow in these systems a major challenge.
In this thesis a multiscale approach has been used to couple a FEM well-leakage model and an ELSA well-leakage model, in order to obtain a multiscale model that would estimate the large-scale flow and leakage from geological storage on extensive domains. In the search for a radius for the fine-scale solver it was discovered that a well is hardly affected by the coarse-scale solution, due to the limited radius of influence of a well. Hence, the derived model is not a true multiscale model and is not able to estimate the flow in the system.
Tue, 01 Jun 2010 00:00:00 GMThttp://hdl.handle.net/1956/41982010-06-01T00:00:00ZFinite-Element Modelling of Buoyancy-Driven Flow in Natural Geothermal Systems
http://hdl.handle.net/1956/4194
Finite-Element Modelling of Buoyancy-Driven Flow in Natural Geothermal Systems
Saltnes, Marie Horn
Master thesis
Finite-element modelling of buoyancy-driven flow in geothermal systems is presented in this thesis. The main focus is on the development of a numerical modelling tool. The program developed is tested against classical benchmarks, and agreement with analytical values for the critical Rayleigh-Darcy numbers for onset of convection and for Nusselt numbers in various scenarios is shown.
Thu, 27 May 2010 00:00:00 GMThttp://hdl.handle.net/1956/41942010-05-27T00:00:00ZPunktprosessmodeller for linjetransektdata
http://hdl.handle.net/1956/4192
Punktprosessmodeller for linjetransektdata
Olsen, Håvard Goodwin
Master thesis
The thesis deals with point processes as models for observations along a transect line. The main process is a Markov-modulated Poisson process, here restricted to two states, which allows points to occur more frequently in some periods than in others. The construction of this process and how its states can be estimated are examined. An application to minke whales in Norwegian waters considers parameter estimation and an investigation of likelihood values. Simulations are also carried out to check earlier theory and assumptions.
Tue, 27 Apr 2010 00:00:00 GMThttp://hdl.handle.net/1956/41922010-04-27T00:00:00Z4D Seismic History Matching Using the Ensemble Kalman Filter (EnKF): Possibilities and Challenges
http://hdl.handle.net/1956/4001
4D Seismic History Matching Using the Ensemble Kalman Filter (EnKF): Possibilities and Challenges
Fahimuddin, Abul
Doctoral thesis
This research endeavor presents a 4D seismic history matching work flow
based on the ensemble Kalman filter (EnKF) methodology. The objective
of this work is to investigate the sensitivity of different combinations of
production and seismic data on EnKF model updating. In particular, we
are interested in quantifying the performance of EnKF-based model updating
experiments with respect to production and seismic data matching, as well
as in estimating uncertain reservoir parameters, e.g., porosity and permeability.
The reservoir-seismic model system used consists of a commercial
reservoir simulator coupled to an implemented rock physics model and a
forward seismic modeling tool based on 1D convolution with weak contrast
reflectivity approximation. One of the challenging issues in using
4D seismic data in reservoir history matching is to compare the measured
data to the model data in a consistent way. Based on our realistic
synthetic reservoir characterization case, time-difference impedance data
generally performed better than time-difference amplitude data, and the
matching of seismic data mostly improved with its inclusion. In estimating
posterior porosity and permeability, seismic difference data provided better
estimates than production data alone, especially in the aquifer region and
in areas that might be considered for in-fill wells.
We experienced that the integration of seismic data in the elastic domain
mostly provided better results than using seismic data at the amplitude
level. This may be due to the measurement error used, and hence, further
investigations are suggested to ascertain the appropriate level of seismic
data integration.
The reservoir simulation model used is a sector model based on a full
field North Sea reservoir. The prior ensemble consists of 100 model
realizations. For computational efficiency, we have used efficient subspace-based
EnKF implementations to handle the effects of large data sets such
as 4D seismic. It may be difficult to assimilate 4D seismic data since it is
related to the model variables at two or more time instances. Hence, we
have used a combination of the EnKF and the ensemble Kalman smoother (EnKS) to condition the reservoir model on seismic data.
We performed a thorough study on the effects of using a large number
of measurements in the EnKF by considering a single update of a very simple
linear model. The sensitivity of the EnKF update to several parameters, e.g.,
model dimension, correlation length, and measurement error variance, is
also presented. We investigated the accuracy of the traditional covariance
estimate with a large number of measurements. We demonstrated that the
ensemble size has to be much larger than the number of measurements in
order to obtain an accurate solution, and that the problem becomes more
severe when the measurement uncertainty decreases, indicating that some
kind of localization may have to be applied more often than previously
believed.
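The analysis step underlying these experiments is the standard stochastic EnKF update with perturbed observations. The following sketch, for a single scalar observation and a plain (non-subspace) implementation, is a textbook version rather than the implementation used in the thesis.

```python
import random

def enkf_update(ensemble, obs, obs_index, obs_var, seed=0):
    """One stochastic EnKF analysis step for a single scalar
    observation of state component `obs_index`. Each ensemble member
    is a list of state values; covariances are estimated from the
    ensemble and each member sees its own perturbed observation."""
    rng = random.Random(seed)
    n, m = len(ensemble), len(ensemble[0])
    # Ensemble mean and anomalies.
    mean = [sum(x[j] for x in ensemble) / n for j in range(m)]
    anom = [[x[j] - mean[j] for j in range(m)] for x in ensemble]
    # Forecast variance of the observed component and its covariance
    # with every state component, both estimated from the ensemble.
    hx = [a[obs_index] for a in anom]
    var_hx = sum(v * v for v in hx) / (n - 1)
    cov = [sum(anom[i][j] * hx[i] for i in range(n)) / (n - 1)
           for j in range(m)]
    # Kalman gain, one entry per state component.
    gain = [c / (var_hx + obs_var) for c in cov]
    # Update each member against its own perturbed observation.
    updated = []
    for x in ensemble:
        d = obs + rng.gauss(0.0, obs_var ** 0.5)
        updated.append([x[j] + gain[j] * (d - x[obs_index])
                        for j in range(m)])
    return updated
```

The sampling-error problem discussed above appears here directly: `cov` and `var_hx` are noisy estimates from n members, which is why localization is needed when the number of measurements is large relative to the ensemble size.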
In the real field case study, we have focused on matching the inverted
acoustic impedance ratio (monitor survey/base survey) data between two
time steps of several years of production. Note that for this real field case,
there is a long period of production before the seismic data was assimilated.
Hence, the porosity and permeability fields had a large influence induced
by production data before they were actually updated with seismic data.
Global and local analysis schemes assimilate production data and seismic
data respectively. In our implementation of local analysis, we used three
significant regions, and seismic data within a given local analysis region is
influenced only by variables in the same region. The posterior ensemble
of models showed good match to both production data and seismic data.
In most of the cases of reservoir characterization, the combined use of 4D
seismic with production data improved history matching for the wells and
also improved posterior impedance ratio data matching. In addition, 4D
seismic data provided more information related to permeability update
in the aquifer and in-fill areas. The results indicate that the local analysis
reduced the amount of spurious correlations and tendencies to ensemble
collapse seen with global analysis.
Thu, 27 May 2010 00:00:00 GMThttp://hdl.handle.net/1956/40012010-05-27T00:00:00ZInternal pressure gradient errors in sigma-coordinate ocean models: The finite volume and weighted approaches
http://hdl.handle.net/1956/3985
Internal pressure gradient errors in sigma-coordinate ocean models: The finite volume and weighted approaches
Pedersen, Helene Hisken
Master thesis
Terrain-following (sigma-coordinate) models are widely used. They are often advantageous when dealing with large variations in topography, and give an accurate representation of the bottom and top boundary layers. However, near steep topography, the use of these coordinates can lead to a large error in the internal pressure gradient force.
Using finite differences is the traditional way of discretising the equations. However, it is possible to integrate over horizontal cells by using a finite volume approach instead. In this work, we will interpret the traditional computation of the internal pressure in the Princeton Ocean Model (Blumberg and Mellor, 1987) as a finite volume method. We will include additional points in the computational stencil and derive
higher-order finite volume methods.
The standard 2nd-order POM method will also be combined with the rotated-grid approach of Thiem and Berntsen (2006). We will investigate the possibility of using an 'optimal weighting', with weights that depend on the topography, the stratification, and the grid size. All methods will be applied to an idealised test case: the seamount problem.
Mon, 26 Apr 2010 00:00:00 GMThttp://hdl.handle.net/1956/39852010-04-26T00:00:00ZThe Stability and Instability of Solitary Waves
http://hdl.handle.net/1956/3984
The Stability and Instability of Solitary Waves
Nguyen, Nguyet Thanh
Doctoral thesis
Fri, 23 Apr 2010 00:00:00 GMThttp://hdl.handle.net/1956/39842010-04-23T00:00:00ZSigma coordinate pressure gradient errors and the basin problem
http://hdl.handle.net/1956/3975
Sigma coordinate pressure gradient errors and the basin problem
Ness, Borghild
Master thesis
This work focuses on how internal pressure gradient errors affect numerical instability in idealised ocean basins. The nature of the errors is investigated and discussed. First, in Chapter 1, the ocean model and the underlying equations are discussed. In Chapter 2, different shapes of ocean basins are studied; experiments show how the shapes affect the generation of false geostrophic flow in the basins, and the results identify the better basin to proceed with in the later experiments. In Chapter 3, the basin's ability to represent real physical phenomena is tested, using external and internal Kelvin waves as a measure of how well the model works. The SESK is found in Chapter 4, together with new errors. In Chapter 5 the model's sensitivity to the advection scheme is tested. An overview of the results and ideas for future investigations is given in Chapter 6.
Thu, 15 Apr 2010 00:00:00 GMThttp://hdl.handle.net/1956/39752010-04-15T00:00:00ZNonhydrostatic pressure in ocean models with focus on wind driven internal waves
http://hdl.handle.net/1956/3960
Nonhydrostatic pressure in ocean models with focus on wind driven internal waves
Bergh, Jon
Doctoral thesis
Fri, 05 Feb 2010 00:00:00 GMThttp://hdl.handle.net/1956/39602010-02-05T00:00:00ZPyProp - A Python Framework for Propagating the Time Dependent Schrödinger Equation
http://hdl.handle.net/1956/3892
PyProp - A Python Framework for Propagating the Time Dependent Schrödinger Equation
Birkeland, Tore
Doctoral thesis
Fri, 18 Dec 2009 00:00:00 GMThttp://hdl.handle.net/1956/38922009-12-18T00:00:00ZRegresjonsmodeller i skipsforsikring
http://hdl.handle.net/1956/3871
Regresjonsmodeller i skipsforsikring
Schei, Ole Johan Sørensen
Master thesis
Tue, 15 Sep 2009 00:00:00 GMThttp://hdl.handle.net/1956/38712009-09-15T00:00:00ZBeskrivelse av skoleforsøket: Lagring av CO2 under havbunnen
http://hdl.handle.net/1956/3837
Beskrivelse av skoleforsøket: Lagring av CO2 under havbunnen
Nordbotten, Jan Martin; Rygg, Kristin
Working paper
Thu, 04 Mar 2010 09:10:22 GMThttp://hdl.handle.net/1956/38372010-03-04T09:10:22ZKlasseromsforsøk om lagring av CO2 under havbunnen
http://hdl.handle.net/1956/3836
Klasseromsforsøk om lagring av CO2 under havbunnen
Nordbotten, Jan Martin; Rygg, Kristin
Journal article
Fri, 01 Jan 2010 00:00:00 GMThttp://hdl.handle.net/1956/38362010-01-01T00:00:00ZElevar si grafiske forståing av derivasjon - Ei kvalitativ tilnærming
http://hdl.handle.net/1956/3801
Elevar si grafiske forståing av derivasjon - Ei kvalitativ tilnærming
Jørgensen, Jon Arild
Master thesis
This thesis deals with pupils' graphical understanding of differentiation. The aim has been to survey pupils' concept images associated with a graphical representation of the derivative, and to examine the relationship between concept images and solution strategies in differentiation tasks where the derivative must be interpreted graphically.
Differentiation is a large topic for pupils who choose to specialize in mathematics in upper secondary school. To understand the derivative fully, pupils depend on concepts such as functions, limits, continuity and slope. It is not possible to cover all aspects of pupils' understanding of differentiation within the scope of a thesis at this level; I have therefore chosen to focus on pupils' graphical understanding of the derivative.
The motivation for choosing a graphical angle is that the derivative can be approached more intuitively than through the traditional limit approach chosen in the curriculum and in many textbooks. David Tall has done extensive research in this area; his work builds on Piaget and constructivism, which is also the theoretical basis of this thesis.
Furthermore, a qualitative method is used in this thesis, because the large quantitative surveys (e.g. TIMSS) say little about why pupils answer the way they do when solving tasks. I have therefore chosen to interview six pupils working on differentiation tasks and to analyse these interviews thoroughly.
The results indicate that the pupils have a good grasp of the first derivative of a function. They have a satisfactory image of the slope of the graph at a point, and often use tangents to visualize it. Nevertheless, they have some cognitive conflicts that make it difficult to interpret the first derivative graphically, and they also have problems visualizing the second derivative. Furthermore, pupils with a rich concept image and many cognitive units are more flexible when solving tasks.
On the basis of these results it can be argued that more work should be done on the graphical interpretation of the derivative in the classroom. The graphical representations and the cognitive conflicts pupils encounter should be used as a springboard towards a deeper understanding of the concept of differentiation. Many free, easily available digital tools can be used in teaching, in a way that demands little of the teacher yet helps give pupils a richer concept image.
Mon, 01 May 2006 00:00:00 GMThttp://hdl.handle.net/1956/38012006-05-01T00:00:00ZNumerical methods for modeling geothermal energy extraction
http://hdl.handle.net/1956/3773
Numerical methods for modeling geothermal energy extraction
Fosen, Anders Solstrand
Master thesis
Modeling of flow in porous media is an important scientific research area, and has been so for decades. It is also one of the major topics within applied mathematics. Models for flow in porous media are important, for example, in the oil industry, in groundwater hydrology and in geothermal energy extraction. In this thesis we build both a mathematical and a numerical geothermal model. To understand the processes that happen in geothermal reservoirs far below the earth's surface, good models are needed. The long-term reservoir behavior is important when the economic feasibility of a geothermal project is determined, and good models are needed to determine that behavior.
To model flow in porous media, several steps need to be taken. The first step is to obtain and understand the background knowledge, such as theory from reservoir mechanics, that is needed to build a model. Some of this knowledge is common to all the topics that use flow-in-porous-media models; other parts of the theory are more specific and connected to one application. When sufficient knowledge has been obtained, the next step is to use it to create a mathematical model for flow in porous media. When this has been done, it is time to implement a numerical model based on the mathematical model.
To obtain a numerical model, it is common to discretize the continuous expressions in the mathematical model, trying to retain their essential properties. Discretizing the model expressions often leads to a linear system that can be solved by numerical equation solvers. The main focus in this thesis is the discretization of the equation terms, both in space and in time. We will use a finite element method to spatially discretize the diffusion term in our model equations, and a finite difference method to discretize the advection term in space. An equation term can be solved with either explicit or implicit time discretization: solved explicitly, the term is evaluated at the start of each time step using the previous equation values; solved implicitly, it is evaluated at the end of each time step. We will try to create an adaptive strategy that decides which terms should be solved with explicit time discretization.
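The difference between explicit and implicit treatment of a term can be illustrated on the simple decay equation du/dt = -k u, for which backward Euler remains stable at any step size while forward Euler does not. This is a generic illustration, not the thesis' model.

```python
def explicit_euler(u, k, dt, steps):
    """Forward Euler for du/dt = -k*u: the rate is evaluated at the
    start of each time step. Unstable when k*dt > 2."""
    for _ in range(steps):
        u = u + dt * (-k * u)
    return u

def implicit_euler(u, k, dt, steps):
    """Backward Euler: the rate is evaluated at the end of each step,
    here solvable in closed form. Stable for any dt > 0."""
    for _ in range(steps):
        u = u / (1.0 + k * dt)
    return u
```

With k = 1 and a large step dt = 3, the explicit scheme oscillates with growing amplitude while the implicit scheme decays monotonically, which is the trade-off an adaptive explicit/implicit strategy has to weigh against the cost of implicit solves.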
The thesis is split into 6 chapters. Chapter 1 will work as a background for the rest of the thesis, and is dedicated to geothermal energy extraction. As we build a model for geothermal energy extraction, it is important to have some knowledge of how a geothermal reservoir works. In Chapter 2 we will go through the theory from reservoir mechanics that is relevant for this thesis. We will explain the terms porous media, porosity, representative elementary volume, permeability, homogeneity, and isotropy. We will also explain Darcy's law and the general conservation law. At the end of the chapter we will look at the similarities and differences between the physical properties enthalpy and temperature.
The mathematical model is built on a local and a reservoir scale conservation law for enthalpy, and we create this model in Chapter 3. We see our reservoir as blocks of rock, with fractures that are filled with water between them. The local conservation law will model the heat transfer in one block and the fractures near it. To do this we will split the block up into layers, and...
Fri, 20 Nov 2009 00:00:00 GMThttp://hdl.handle.net/1956/37732009-11-20T00:00:00ZStanley-Reisner ringer med sykliske gruppevirkninger
http://hdl.handle.net/1956/3772
Stanley-Reisner ringer med sykliske gruppevirkninger
Loe, Kirsti Selliseth
Master thesis
The thesis considers certain Stanley-Reisner ideals I in the polynomial ring S in n variables, where the set of exponent vectors of the minimal generators of I is invariant under the action of the group generated by the cyclic permutation (1,...,n) on [n]. We study the quotient ring S/I and its minimal free resolution, and we also investigate whether S/I is Cohen-Macaulay.
Fri, 29 May 2009 00:00:00 GMThttp://hdl.handle.net/1956/37722009-05-29T00:00:00ZMultidimensional Fourier Transform on Sparse Grids
http://hdl.handle.net/1956/3771
Multidimensional Fourier Transform on Sparse Grids
Fjær, Sveinung
Master thesis
When working on multidimensional problems, the number of points needed grows exponentially with the dimension when using a tensor product grid. This is known as the curse of dimensionality. In this thesis we propose a set of sparse grids which can be used to dampen this curse. The grids are chosen such that the Fourier transform can be applied to functions sampled on them, and the thesis describes such an algorithm.
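The growth described above can be made concrete by counting Fourier modes. The sketch below compares a full tensor-product index set with a hyperbolic-cross set, one common choice of sparse index set; it is an illustration of the general idea, not necessarily the grids proposed in the thesis.

```python
from itertools import product

def full_grid_size(n, d):
    """Number of Fourier modes k with |k_i| <= n in each of d
    dimensions: (2n+1)^d, exponential in d."""
    return (2 * n + 1) ** d

def hyperbolic_cross_size(n, d):
    """Number of modes with prod_i max(1, |k_i|) <= n, a sparse
    index set that grows far more slowly with the dimension d."""
    count = 0
    for k in product(range(-n, n + 1), repeat=d):
        p = 1
        for ki in k:
            p *= max(1, abs(ki))
        if p <= n:
            count += 1
    return count
```

Already for n = 4 in two dimensions the hyperbolic cross keeps 49 of the 81 full-grid modes, and the ratio shrinks rapidly as n and d grow.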
Mon, 01 Jun 2009 00:00:00 GMThttp://hdl.handle.net/1956/37712009-06-01T00:00:00ZRenal Function Estimation using Magnetic Resonance Images
http://hdl.handle.net/1956/3770
Renal Function Estimation using Magnetic Resonance Images
Schwinger, Eyram A. Kojo
Master thesis
This study is aimed at testing the use of Magnetic Resonance (MR) images and mathematical models for renal parameter estimation.
The study was based on four models: the Patlak model, the Cortical Compartment model, the Separable Compartment/Sourbron model, and the Deconvolution method.
The project included the mathematical derivation of the models. The models were then applied to whole-kidney MR images in order to obtain and compare parameters. Part of the project also included the visualization of the parameters produced by the models on a voxel-by-voxel basis.
The project showed that, of the four models, the Deconvolution method produces the highest parameter values, followed by the Patlak model. The Cortical Compartment and Sourbron models produce similar results. The voxel-by-voxel visualization also showed that only the renal cortex produces high flow results.
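Of the four models, the Patlak model has a particularly compact graphical form: the tissue-to-plasma concentration ratio is regressed against the normalized integrated plasma concentration, and the slope estimates the uptake rate constant. The following is a generic sketch of that analysis (trapezoidal integration, ordinary least squares), not the thesis' implementation.

```python
def patlak_fit(t, c_tissue, c_plasma):
    """Patlak graphical analysis: regress C_t(t)/C_p(t) against
    int_0^t C_p ds / C_p(t). Returns (slope, intercept), where the
    slope estimates the uptake rate constant and the intercept the
    initial distribution volume."""
    x, y, integral = [], [], 0.0
    for i in range(1, len(t)):
        # Trapezoidal rule for the running plasma integral.
        integral += 0.5 * (c_plasma[i] + c_plasma[i - 1]) * (t[i] - t[i - 1])
        x.append(integral / c_plasma[i])
        y.append(c_tissue[i] / c_plasma[i])
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return slope, my - slope * mx
```

On synthetic data with constant plasma concentration and linear tissue uptake, the fit recovers the generating slope and intercept exactly, which is a useful sanity check before applying the model voxel by voxel.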
Tue, 02 Jun 2009 00:00:00 GMThttp://hdl.handle.net/1956/37702009-06-02T00:00:00ZAnalysis of left truncated data with an application to insurance data
http://hdl.handle.net/1956/3769
Analysis of left truncated data with an application to insurance data
Berentsen, Geir Drage
Master thesis
This thesis discusses different ways of analysing left truncated data when
the lower bound itself is a stochastic variable. We will consider the possible
dependence between the variable of interest and the truncating variable, and
how the dependency structure between these variables influences estimation of
the underlying distribution.
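The basic difficulty can be demonstrated in a few lines: under left truncation by a stochastic lower bound, naively averaging the observed values overestimates the underlying mean. A minimal simulation, with both the variable of interest and the (independent) truncating variable assumed normal purely for illustration:

```python
import random

def truncated_sample(n, seed=7):
    """Draw independent pairs (X, T) with X ~ N(0, 1) and
    T ~ N(-1, 1), keeping X only when X >= T (left truncation by a
    stochastic lower bound, as when small insurance claims below a
    varying threshold are never reported). Returns the kept X."""
    rng = random.Random(seed)
    kept = []
    while len(kept) < n:
        x = rng.gauss(0.0, 1.0)
        t = rng.gauss(-1.0, 1.0)   # stochastic truncation threshold
        if x >= t:
            kept.append(x)
    return kept
```

The true mean of X is 0, yet the mean of the kept observations is clearly positive; correcting for this bias requires modeling the joint behavior of X and T, which is the topic of the thesis.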
Tue, 02 Jun 2009 00:00:00 GMThttp://hdl.handle.net/1956/37692009-06-02T00:00:00ZStreamline based History Matching
http://hdl.handle.net/1956/3768
Streamline based History Matching
Sandve, Tor Harald
Master thesis
Streamline-based reservoir simulation has been used with great success in the petroleum industry for decades. Fast computational speed, together with the fact that sensitivities can be computed analytically along streamlines, makes streamline-based methods efficient for history matching problems. The key idea is to approximate 3D fluid flow by a family of 1D transport equations along streamlines, achieved by introducing a time-of-flight coordinate. In this thesis a method called generalized travel-time inversion (GTTI) is discussed for history matching. The main idea behind GTTI is to combine amplitude matching with travel-time matching. The inverse problem associated with amplitude matching is fully nonlinear and can therefore give unstable and non-unique solutions, while travel-time inversion has quasi-linear properties. GTTI is able to use all the available data and still preserve the quasi-linear properties of travel-time inversion. Several authors have studied GTTI previously, but in these articles only the first-order sensitivities are used in the history matching. Since the second-order sensitivities can be computed along the streamlines at practically no cost, a method that includes information from the second-order sensitivities should in theory show better convergence properties. Synthetic examples show that this holds for cases where almost no regularization is needed, but not for more realistic and ill-posed problems. To further improve the convergence of the minimization, a line search and a trust-region strategy are suggested. Both the line search strategy and the Levenberg-Marquardt trust-region method show good performance, both in their ability to reduce the objective function and in characterizing the permeability field. A method where the regularization parameters are chosen by the L-curve criterion is also considered and compared to the other methods.
An advantage of this method is that the regularization term is reduced as the iterate enters the neighborhood of the solution, where less regularization is needed.
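The time-of-flight coordinate mentioned above is defined by tau = ∫ phi / |v| ds along a streamline: the travel time of a neutral particle carried by the interstitial velocity. A minimal numerical sketch of this accumulation, for uniform segment length and given porosity and speed profiles (an illustration, not the thesis' simulator):

```python
def time_of_flight(phi, speed, ds):
    """Cumulative time of flight along a streamline discretised into
    segments of length ds, with porosity phi[i] and flow speed
    speed[i] on segment i: tau = sum of phi/|v| * ds."""
    tau, out = 0.0, []
    for p, v in zip(phi, speed):
        tau += p / v * ds
        out.append(tau)
    return out
```

Re-parametrizing the transport equations in tau instead of arc length is what collapses the 3D problem into independent 1D problems, and it is along this coordinate that the travel-time sensitivities of GTTI are evaluated.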
Thu, 07 May 2009 00:00:00 GMThttp://hdl.handle.net/1956/37682009-05-07T00:00:00ZSolvenskontroll ved skifting mellom ulike volatilitetsregimer
http://hdl.handle.net/1956/3767
Solvenskontroll ved skifting mellom ulike volatilitetsregimer
Reikerås, Eivind
Master thesis
The thesis deals with challenges related to the introduction of Solvency 2.
Tue, 02 Jun 2009 00:00:00 GMThttp://hdl.handle.net/1956/37672009-06-02T00:00:00ZMultipliers of the Dirichlet space
http://hdl.handle.net/1956/3766
Multipliers of the Dirichlet space
Alsaker, Henning Abbedissen
Master thesis
The topic of the thesis is the Dirichlet space of analytic functions in the unit disk and multipliers of this space. We study the elementary properties of multipliers, characterizations of multipliers, univalent multipliers and the Banach algebra of multipliers.
Fri, 20 Nov 2009 00:00:00 GMThttp://hdl.handle.net/1956/37662009-11-20T00:00:00ZHøyfrekvens finans og markedets mikrostruktur på Oslo Børs
http://hdl.handle.net/1956/3765
Høyfrekvens finans og markedets mikrostruktur på Oslo Børs
Danielsen, Arne
Master thesis
High-frequency finance, the topic of this thesis, is the study of high-frequency financial data. In this thesis we have looked at data with the highest possible frequency, namely data from every single trade. The high-frequency data tell us the time of a trade and the price at that trade. Differencing price and time gives the price change y(i) and the duration tau(i), which we study further. These data turn out to have several special properties; among other things, the price changes take only a few distinct values. It is a challenge to find a model that fits such data. Bauwens et al. (2008) mention two approaches to this problem. One is to view the data as a Brownian motion with added noise; the other is to view the data as structural information. In this thesis we have mainly considered the latter approach and studied the ACM-ACD model proposed by Russell and Engle (2005). In this model one assumes that y(i-1) has an effect on tau(i), and that tau(i) has an effect on y(i). In this thesis we conclude that only the latter holds: a cross-plot shows no relation between y(i-1) and tau(i), and estimation of the ACM-ACD model indicates that y(i-1) does not explain tau(i). We have therefore simplified the ACM-ACD model somewhat. A problem when estimating the model is that we cannot be sure whether the algorithm finds a global maximum. Diagnostic tests answer this question and also tell us how well the model fits the data, but neither Russell and Engle (2005) nor Bauwens et al. (2008) arrive at good methods for assessing the model fit. In this thesis the problem is solved by estimating the model on the real data, simulating the model with the resulting parameter values, and then comparing the real and the simulated data. An interesting question in connection with high-frequency data is whether the data can be used to make money. This is where triangular arbitrage enters. Given two exchange rates USD/SEK and USD/NOK, the cross rate NOK/SEK is given as the ratio of the first two; if this does not hold, there is an arbitrage opportunity. To model the three time series USD/SEK, USD/NOK and NOK/SEK we can use a cointegration model. Trapletti et al. (2002) show that there is cointegration in a data set with the exchange rates USD/DEM, USD/JPY and DEM/JPY, while in this thesis we establish cointegration in the exchange rates USD/SEK, USD/NOK and NOK/SEK. Finally, we interpret the results from the cointegration model and compare its predictive ability with that of a VAR model.
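The triangular no-arbitrage relation between the three exchange rates can be illustrated with a small sketch; the quoted rates below are invented round numbers for illustration, not data from the thesis.

```python
# Triangular no-arbitrage: the cross rate implied by the two USD quotes
# should equal the quoted NOK/SEK rate. A persistent deviation larger
# than transaction costs would be an arbitrage opportunity.

def implied_cross(usd_sek: float, usd_nok: float) -> float:
    """Cross rate implied by the two USD quotes (their ratio)."""
    return usd_sek / usd_nok

def arbitrage_deviation(usd_sek: float, usd_nok: float, cross: float) -> float:
    """Relative deviation of the quoted cross rate from the implied one."""
    implied = implied_cross(usd_sek, usd_nok)
    return (cross - implied) / implied

# Hypothetical quotes: USD/SEK = 10.50, USD/NOK = 10.00,
# quoted cross rate 1.06 while the implied cross rate is 1.05.
dev = arbitrage_deviation(10.50, 10.00, 1.06)
```

In a cointegration setting it is exactly this deviation (in log rates) that is expected to be stationary around zero.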
Wed, 09 Sep 2009 00:00:00 GMThttp://hdl.handle.net/1956/37652009-09-09T00:00:00ZMonotonicity Conditions for Discretization of Parabolic Conservation Laws
http://hdl.handle.net/1956/3739
Monotonicity Conditions for Discretization of Parabolic Conservation Laws
Hvidevold, Hilde Kristine
Master thesis
In recent years, monotonicity of control-volume methods for elliptic
equations has been studied. A discrete maximum principle was established in Keilegavlen et al. [18], and a set of monotonicity conditions on general quadrilateral grids was derived in Nordbotten et al. [23]. Monotonicity criteria for parabolic equations have not yet been studied. In this thesis we therefore extend the existing monotonicity conditions for elliptic equations to a set of conditions for parabolic equations. These conditions are derived under the assumption that the discrete maximum principle for parabolic equations is the same as the principle for elliptic problems. It turns out that these conditions are stricter than the elliptic ones.
Since the maximum principle for the time-discrete parabolic equation differs from the principle for the elliptic equation, it may be necessary to reformulate the discrete maximum principle, and it is not obvious how this should be done. We therefore discuss various formulations of time-discrete maximum principles together with numerical examples.
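As a minimal illustration of monotonicity for a time-discrete parabolic problem (a simple finite-difference scheme, not the control-volume discretization studied in the thesis), one can verify that the implicit-Euler system matrix for the 1D heat equation has a nonnegative inverse, so the scheme maps nonnegative data to nonnegative solutions. The grid size and time-step ratio below are arbitrary choices.

```python
import numpy as np

# Implicit Euler for u_t = u_xx on a uniform 1D grid with homogeneous
# Dirichlet boundaries: (I + r*A) u^{n+1} = u^n, with r = dt/dx^2 and
# A the discrete negative Laplacian. The scheme is monotone when the
# system matrix is an M-matrix, i.e. its inverse is entrywise >= 0.

def implicit_euler_matrix(n: int, r: float) -> np.ndarray:
    A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    return np.eye(n) + r * A

def is_monotone(M: np.ndarray, tol: float = 1e-12) -> bool:
    """The update u^{n+1} = M^{-1} u^n is monotone if M^{-1} >= 0."""
    return bool(np.all(np.linalg.inv(M) >= -tol))

# Diagonal dominance (1 + 2r > 2r) makes this an M-matrix for any r > 0,
# so implicit Euler on this problem is unconditionally monotone.
mono = is_monotone(implicit_euler_matrix(20, r=10.0))
```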
Tue, 02 Jun 2009 00:00:00 GMThttp://hdl.handle.net/1956/37392009-06-02T00:00:00ZEffect of Erroneous Representation of Location and Orientation of EM Receivers for Monitoring of Submarine Petroleum Reservoirs
http://hdl.handle.net/1956/3738
Effect of Erroneous Representation of Location and Orientation of EM Receivers for Monitoring of Submarine Petroleum Reservoirs
Seyffarth, Hanne Christine
Master thesis
The receivers in electromagnetic monitoring should ideally be placed in exactly the same location during every acquisition. However, for technological and economic reasons, the receivers are currently brought up to the surface to collect the measured data between acquisitions. This requires repeated redeployment of the receivers, and uncertainties in their location and orientation thus arise. In this thesis we investigate the sensitivity of the electromagnetic signal to an erroneous representation of the location and orientation of the receivers. This gives an indication of whether solutions with fixed receivers should be used in the future.
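The kind of position-sensitivity question posed above can be sketched generically. The 1/r^3 amplitude decay used here is a crude dipole-like proxy chosen purely for illustration; the actual marine EM response is far richer and is not what the thesis computes with.

```python
# Relative change in a received amplitude when the assumed receiver
# offset is wrong by dr, for a field decaying like r**(-p).

def amplitude(r: float, p: float = 3.0) -> float:
    return r ** (-p)

def relative_error(r: float, dr: float, p: float = 3.0) -> float:
    """Relative amplitude error caused by a position error dr at offset r."""
    return abs(amplitude(r + dr, p) - amplitude(r, p)) / amplitude(r, p)

# A 10 m position error at 1 km offset: to first order the relative
# amplitude error is about p * dr / r, i.e. roughly 3 % here.
err = relative_error(1000.0, 10.0)
```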
Tue, 02 Jun 2009 00:00:00 GMThttp://hdl.handle.net/1956/37382009-06-02T00:00:00ZSkadebeløpsmodellering i skadeforsikring
http://hdl.handle.net/1956/3737
Skadebeløpsmodellering i skadeforsikring
Holmedal, Hege Kristine Kristvik
Master thesis
The thesis models claim severities for ships with the lognormal and the Pareto distribution. In addition, it considers a new composite lognormal-Pareto distribution and fits the same data to it. The claim severities are modelled both with SME and with GLM. They are also modelled with left-truncated versions of the above distributions, since only claims more costly than the deductible are reported to the insurance companies.
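A composite (spliced) lognormal-Pareto severity density of the kind mentioned can be sketched as follows. The threshold, tail index and body weight below are illustrative assumptions, and the continuity and differentiability conditions usually imposed at the splice point in the actuarial literature are deliberately not enforced here.

```python
import math

# Spliced severity density: a lognormal body below a threshold theta,
# a Pareto tail above it, glued with a mixing weight w on the body.

def lognorm_pdf(x: float, mu: float, sigma: float) -> float:
    return math.exp(-(math.log(x) - mu) ** 2 / (2 * sigma ** 2)) / (
        x * sigma * math.sqrt(2 * math.pi))

def lognorm_cdf(x: float, mu: float, sigma: float) -> float:
    return 0.5 * (1 + math.erf((math.log(x) - mu) / (sigma * math.sqrt(2))))

def pareto_pdf(x: float, theta: float, alpha: float) -> float:
    # Pareto tail with support x > theta
    return alpha * theta ** alpha / x ** (alpha + 1)

def spliced_pdf(x, mu, sigma, theta, alpha, w):
    """Weight w on the theta-truncated lognormal body, 1 - w on the tail."""
    if x <= theta:
        return w * lognorm_pdf(x, mu, sigma) / lognorm_cdf(theta, mu, sigma)
    return (1 - w) * pareto_pdf(x, theta, alpha)

# Evaluate the density at a claim severity of 2.0 (illustrative parameters)
val = spliced_pdf(2.0, 0.0, 1.0, 5.0, 2.5, 0.8)
```

Left truncation at a deductible d is handled the same way as the body truncation above: divide the density by one minus its CDF at d and restrict the support to x > d.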
Mon, 01 Jun 2009 00:00:00 GMThttp://hdl.handle.net/1956/37372009-06-01T00:00:00ZModelling of Three-Phase Flow Functions for Applications in Enhanced Oil Recovery
http://hdl.handle.net/1956/3592
Modelling of Three-Phase Flow Functions for Applications in Enhanced Oil Recovery
Holm, Randi
Doctoral thesis
Fri, 22 May 2009 00:00:00 GMThttp://hdl.handle.net/1956/35922009-05-22T00:00:00ZImage Inpainting using Nonlinear Partial Differential Equations
http://hdl.handle.net/1956/3578
Image Inpainting using Nonlinear Partial Differential Equations
Holm, Randi
Master thesis
Mon, 23 May 2005 00:00:00 GMThttp://hdl.handle.net/1956/35782005-05-23T00:00:00ZMatching univalent functions and related problems of conformal mappings
http://hdl.handle.net/1956/3379
Matching univalent functions and related problems of conformal mappings
Grong, Erlend
Master thesis
Tue, 01 Jan 2008 00:00:00 GMThttp://hdl.handle.net/1956/33792008-01-01T00:00:00ZAn Analysis of the Corner Velocity Interpolation as the Approximation Space in a Mixed Finite Element Method
http://hdl.handle.net/1956/3378
An Analysis of the Corner Velocity Interpolation as the Approximation Space in a Mixed Finite Element Method
Nome, Morten Andreas
Master thesis
Tue, 01 Jan 2008 00:00:00 GMThttp://hdl.handle.net/1956/33782008-01-01T00:00:00Z