Department of Informatics
http://hdl.handle.net/1956/918
Sun, 01 May 2016 23:26:39 GMT
http://hdl.handle.net/1956/11951
Editing to Eulerian Graphs
Dabrowski, Konrad K.; Golovach, Petr; van 't Hof, Pim; Paulusma, Daniël
Conference object
We investigate the problem of modifying a graph into a connected graph in which the degree of each vertex satisfies a prescribed parity constraint. Let ea, ed and vd denote the operations edge addition, edge deletion and vertex deletion respectively. For any S ⊆ {ea,ed,vd}, we define Connected Degree Parity Editing (S) (CDPE(S)) to be the problem that takes as input a graph G, an integer k and a function δ: V(G) → {0,1}, and asks whether G can be modified into a connected graph H with d_H(v) ≡ δ(v) (mod 2) for each v in V(H), using at most k operations from S. We prove that (*) if S={ea} or S={ea,ed}, then CDPE(S) can be solved in polynomial time; (*) if {vd} ⊆ S ⊆ {ea,ed,vd}, then CDPE(S) is NP-complete and W[1]-hard when parameterized by k, even if δ = 0. Together with known results by Cai and Yang and by Cygan, Marx, Pilipczuk, Pilipczuk and Schlotter, our results completely classify the classical and parameterized complexity of the CDPE(S) problem for all S ⊆ {ea,ed,vd}. We obtain the same classification for a natural variant of the CDPE(S) problem on directed graphs, where the target is a weakly connected digraph in which the difference between the in- and out-degree of every vertex equals a prescribed value. As an important implication of our results, we obtain polynomial-time algorithms for the Eulerian Editing problem and its directed variant. To the best of our knowledge, the only other natural non-trivial graph class H for which the H-Editing problem is known to be polynomial-time solvable is the class of split graphs.
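The target condition of CDPE(S) is straightforward to verify for a candidate solution. The sketch below (Python; the function and graph encoding are ours, not from the paper) checks that a graph H is connected and that every vertex degree matches the prescribed parity; with δ identically 0 this is exactly the Eulerian condition of a connected graph with all degrees even.

```python
from collections import deque

def satisfies_cdpe_target(adj, delta):
    """Check the CDPE target condition: H (an adjacency dict) must be
    connected and every vertex v must satisfy d_H(v) = delta(v) (mod 2)."""
    vertices = list(adj)
    if not vertices:
        return True
    if any(len(adj[v]) % 2 != delta[v] for v in vertices):
        return False  # some vertex violates its parity constraint
    # Connectivity check by breadth-first search from an arbitrary vertex.
    seen = {vertices[0]}
    queue = deque([vertices[0]])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return len(seen) == len(vertices)

# A 4-cycle is connected with all degrees even, so it satisfies delta = 0
# (the Eulerian case); a path on three vertices does not.
c4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
p3 = {0: [1], 1: [0, 2], 2: [1]}
print(satisfies_cdpe_target(c4, {v: 0 for v in c4}))  # True
print(satisfies_cdpe_target(p3, {v: 0 for v in p3}))  # False
```

The editing problem itself asks how few operations reach such an H; the checker only recognises the target.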
Wed, 01 Jan 2014 00:00:00 GMT
http://hdl.handle.net/1956/11950
Connecting Vertices by Independent Trees
Basavaraju, Manu; Fomin, Fedor; Golovach, Petr; Saurabh, Saket
Conference object
We study the parameterized complexity of the following connectivity problem. For a vertex subset U of a graph G, trees T1, . . . , Ts of G are completely independent spanning trees of U if each of them contains U, and for every two distinct vertices u, v ∈ U, the paths from u to v in T1, . . . , Ts are pairwise vertex disjoint except for the end-vertices u and v. Then for a given s ≥ 2 and a parameter k, the task is to decide if a given n-vertex graph G contains a set U of size at least k such that there are s completely independent spanning trees of U. The problem is known to be NP-complete already for s = 2. We prove the following results: For s = 2 the problem is solvable in time 2^{O(k)} n^{O(1)}. For s = 2 the problem does not admit a polynomial kernel unless NP ⊆ coNP/poly. For arbitrary s, we show that the problem is solvable in time f(s, k) n^{O(1)} for some function f of s and k only.
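The definition above translates directly into a checker for small instances. A minimal sketch (Python; names and graph encoding are ours), with trees given as adjacency dicts:

```python
from itertools import combinations

def tree_path(tree, u, v):
    """Return the unique u-v path in a tree (adjacency dict), via DFS."""
    stack, visited = [(u, [u])], {u}
    while stack:
        x, path = stack.pop()
        if x == v:
            return path
        for y in tree[x]:
            if y not in visited:
                visited.add(y)
                stack.append((y, path + [y]))
    return None

def completely_independent_for(trees, U):
    """Check the condition from the abstract: every tree contains U, and
    for each pair u, v in U the u-v paths in the trees are pairwise
    vertex disjoint except for the end-vertices u and v."""
    if any(not set(U) <= set(t) for t in trees):
        return False
    for u, v in combinations(U, 2):
        paths = [tree_path(t, u, v) for t in trees]
        for p, q in combinations(paths, 2):
            if set(p) & set(q) != {u, v}:
                return False
    return True

# Two spanning trees whose 0-1 paths (0-1 and 0-2-1) meet only in 0 and 1.
t1 = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
t2 = {0: [2], 1: [2], 2: [0, 1, 3], 3: [2]}
print(completely_independent_for([t1, t2], [0, 1]))  # True
```

The algorithmic results concern finding a large such U, which this checker does not attempt.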
Wed, 01 Jan 2014 00:00:00 GMT
http://hdl.handle.net/1956/11779
Largest chordal and interval subgraphs faster than 2^n
Bliznets, Ivan; Fomin, Fedor; Pilipczuk, Michal Pawel; Villanger, Yngve
Journal article
We prove that in a graph with n vertices, induced chordal and interval subgraphs with the maximum number of vertices can be found in time O(2^{λn}) for some λ < 1. These are the first algorithms breaking the trivial 2^n n^{O(1)} bound of the brute-force search for these problems.
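For contrast, the trivial brute force mentioned above, enumerating all 2^n vertex subsets and testing chordality, can be sketched as follows (Python; names ours). The chordality test uses the standard fact that a graph is chordal iff repeatedly deleting simplicial vertices empties it:

```python
from itertools import combinations

def is_chordal(adj, verts):
    """Test chordality of the subgraph induced by `verts`, using the fact
    that a graph is chordal iff repeatedly deleting a simplicial vertex
    (one whose remaining neighbours form a clique) empties the graph."""
    remaining = set(verts)
    while remaining:
        for v in list(remaining):
            nbrs = adj[v] & remaining
            if all(b in adj[a] for a, b in combinations(nbrs, 2)):
                remaining.remove(v)  # v is simplicial: eliminate it
                break
        else:
            return False  # no simplicial vertex exists
    return True

def largest_chordal_subgraph(adj):
    """The trivial 2^n n^{O(1)} brute force: try every vertex subset."""
    verts = list(adj)
    best = set()
    for mask in range(1 << len(verts)):
        s = {verts[i] for i in range(len(verts)) if mask >> i & 1}
        if len(s) > len(best) and is_chordal(adj, s):
            best = s
    return best

# In a 4-cycle, any three vertices induce a chordal path, but the whole
# cycle is chordless, so the answer has three vertices.
c4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(len(largest_chordal_subgraph(c4)))  # 3
```

The paper's contribution is precisely to beat this enumeration; the sketch only illustrates the baseline being improved.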
Sat, 22 Aug 2015 00:00:00 GMT
http://hdl.handle.net/1956/11729
Non-Constructivity in Kan Simplicial Sets
Bezem, Marc; Coquand, Thierry; Parmann, Erik
Journal article
We give an analysis of the non-constructivity of the following basic result: if X and Y are simplicial sets and Y has the Kan extension property, then Y^X also has the Kan extension property. By means of Kripke countermodels we show that even simple consequences of this basic result, such as edge reversal and edge composition, are not constructively provable. We also show that our unprovability argument will have to be refined if one strengthens the usual formulation of the Kan extension property to one with explicit horn-filler operations.
Thu, 01 Jan 2015 00:00:00 GMT
http://hdl.handle.net/1956/11725
Investigating Streamless Sets
Parmann, Erik
Journal article
<p>In this paper we look at streamless sets, recently investigated by Coquand and Spiwack. A set is streamless if every stream over that set contains a duplicate. It is an open question in constructive mathematics whether the Cartesian product of two streamless sets is streamless.</p> <p>We look at some settings in which the Cartesian product of two streamless sets is indeed streamless; in particular, we show that this holds in Martin-Löf intensional type theory when at least one of the sets has decidable equality. We go on to show that the addition of functional extensionality gives streamless sets decidable equality, and then investigate these results in a few other constructive systems.</p>
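For reference, the streamless property used throughout this abstract can be stated formally; the notation below is ours, a sketch rather than the authors' formal development:

```latex
% A set A is streamless when every stream (infinite sequence) over A
% contains a duplicate:
\mathrm{streamless}(A) \;:\equiv\;
  \forall f \colon \mathbb{N} \to A .\;
  \exists\, i\, j \colon \mathbb{N} .\; i < j \,\wedge\, f(i) = f(j)
```

Constructively, a witness for this statement is a procedure extracting the duplicate positions from any given stream, which is why the property is strictly weaker than having a bound on the size of A.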
Thu, 01 Jan 2015 00:00:00 GMT
http://hdl.handle.net/1956/11718
Case Studies in Constructive Mathematics
Parmann, Erik
Doctoral thesis
<p>The common theme in this thesis is the study of constructive provability: in particular we investigate aspects of finite sets and Kan simplicial sets from a constructive perspective. </p>
<p>There are numerous definitions of finiteness which are classically equivalent but not constructively so. In other words, constructive mathematics is able to distinguish between more notions of finiteness. We start by investigating some relationships between several ways of defining finiteness for sets of natural numbers. As a new result, we give the notion of strictly bounded sets a precise placement in a hierarchy of definitions of finiteness. </p>
<p>We also investigate streamless sets, which constitute another notion of finiteness. Streamless sets require neither decidable equality nor that the set is a subset of an enumerable set, and they are as such more general than strictly bounded sets. It is an open problem whether the Cartesian product of two streamless sets is itself streamless. We show that this holds if at least one of the sets has decidable equality or is of bounded size. The problem remains open for the case where both streamless sets have undecidable equality and fail to be of bounded size. We also show that, in certain constructive systems, the addition of function extensionality makes equality within streamless sets decidable. </p>
<p>Another notion of finiteness is Noetherian. Both streamless and Noetherian can be generalized to properties of binary relations, whereby such sets are those where equality is respectively streamless or Noetherian. We provide a proof that all Noetherian relations are streamless, notably in a type system without inductively defined equality. This result immediately entails that all Noetherian sets are streamless within that type system. </p>
<p>We proceed to investigate aspects of Kan simplicial sets, a notion coming from topology. Kan simplicial sets have recently caught the eye of the type theory community since they can be used to build models of Martin-Löf type theory that validate the Univalence axiom. All known proofs of the following well-known theorem use classical logic: if simplicial sets X and Y are Kan simplicial sets then Y^X is also a Kan simplicial set. This theorem plays an important role in the Kan simplicial set model of type theory. We investigate whether this theorem also holds constructively. The classical definition of the Kan property has at least two non-equivalent constructive interpretations, and we provide countermodels showing the constructive non-provability of the classical theorem above for both of these definitions of Kan simplicial sets. </p>
Fri, 22 Jan 2016 00:00:00 GMT
http://hdl.handle.net/1956/11670
Generating software for MUB complementary sequence constructions
Roodashty, Hanieh
Master thesis
This master thesis was performed at the Department of Informatics, University of Bergen, between February and November 2015. The work was supervised by Professor Matthew G. Parker as part of the research interest in complementary sequence constructions using mutually unbiased bases (MUBs).
This project attempts to improve the set size of the complementary sequences while keeping the upper bound on the PAPR as low as possible and maintaining pairwise distinguishability. To this end, the recursive construction was seeded with optimal mutually unbiased bases in dimensions 2 and 3.
To make the MUB-based sequences usable in OFDM-based systems, we wrote programs that construct distinct arrays and sequences seeded by MUBs in dimensions 2 and 3, with and without linear offset. The code was written in MATLAB.
The code for both dimensions delivered satisfactory results; for low iteration counts, the results matched values calculated manually from theory.
Various strategies were used to increase the speed of the software and to decrease its resource demands, but running the code for higher iterations still requires advanced computing resources such as supercomputers.
The code was written to be as flexible as possible, so that it can be used for other dimensions with minor adjustments, and so that it can readily be ported to other programming languages.
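The defining property of mutually unbiased bases, |⟨u, v⟩|² = 1/d for unit vectors u and v drawn from two different bases, can be checked directly for the standard MUB triple in dimension 2. The sketch below is illustrative Python, not the MATLAB code produced in the thesis:

```python
import math

# The standard triple of mutually unbiased bases in dimension 2 (the
# eigenbases of the Pauli Z, X and Y operators). Illustrative only; the
# thesis seeds its recursive construction with MUBs in dimensions 2 and 3.
s = 1 / math.sqrt(2)
bases = [
    [(1, 0), (0, 1)],             # computational (Z) basis
    [(s, s), (s, -s)],            # Hadamard (X) basis
    [(s, 1j * s), (s, -1j * s)],  # Y basis
]

def inner(u, v):
    return sum(a.conjugate() * b for a, b in zip(u, v))

def mutually_unbiased(b1, b2, d=2, tol=1e-12):
    """Two orthonormal bases of C^d are mutually unbiased when every pair
    of vectors, one from each basis, satisfies |<u, v>|^2 = 1/d."""
    return all(abs(abs(inner(u, v)) ** 2 - 1 / d) < tol
               for u in b1 for v in b2)

for i, j in [(0, 1), (0, 2), (1, 2)]:
    assert mutually_unbiased(bases[i], bases[j])
print("all three pairs are mutually unbiased")
```

Dimension 3 admits four pairwise unbiased bases and can be checked the same way, with d = 3 and the appropriate vectors.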
Wed, 18 Nov 2015 00:00:00 GMT
http://hdl.handle.net/1956/11669
A 43k Kernel for Planar Dominating Set using Computer-Aided Reduction Rule Discovery
Halseth, Johan Torås
Master thesis
In this thesis we explore the technique of Region Decomposition for finding kernels for Planar Dominating Set. We redefine some concepts used in earlier work, fixing some ambiguities along the way. From those concepts we arrive at the 335k kernel upper bound of Alber et al. We then build on the results of Chen et al. to improve this upper bound to 55k.
In the last part of the thesis we use a computer program to exhaustively search for reduced instances of regions, to be used together with the Region Decomposition technique. From the results of the computer program we conclude that any region used in a Region Decomposition can always be reduced to an equivalent region having 12 vertices or fewer, in addition to its two endpoints. This lets us arrive at a 43k kernel for Planar Dominating Set.
Mon, 15 Feb 2016 00:00:00 GMT
http://hdl.handle.net/1956/11601
Axis patterning by BMPs: cnidarian network reveals evolutionary constraints
Genikhovich, Grigory; Fried, Patrick; Prünster, M. Mandela; Schinko, Johannes B.; Gilles, Anna F.; Fredman, David; Meier, Karin; Iber, Dagmar; Technau, Ulrich
Journal article
BMP signaling plays a crucial role in the establishment of the dorso-ventral body axis in bilaterally symmetric animals. However, the topologies of the bone morphogenetic protein (BMP) signaling networks vary drastically in different animal groups, raising questions about the evolutionary constraints and evolvability of BMP signaling systems. Using loss-of-function analysis and mathematical modeling, we show that two signaling centers expressing different BMPs and BMP antagonists maintain the secondary axis of the sea anemone Nematostella. We demonstrate that BMP signaling is required for asymmetric Hox gene expression and mesentery formation. Computational analysis reveals that network parameters related to BMP4 and Chordin are constrained both in Nematostella and Xenopus, while those describing the BMP signaling modulators can vary significantly. Notably, only chordin, but not bmp4 expression needs to be spatially restricted for robust signaling gradient formation. Our data provide an explanation of the evolvability of BMP signaling systems in axis formation throughout Eumetazoa.
Sun, 01 Mar 2015 00:00:00 GMT
http://hdl.handle.net/1956/11600
Efficient CRISPR-Cas9-mediated generation of knockin human pluripotent stem cells lacking undesired mutations at the targeted locus
Merkle, Florian T.; Neuhausser, Werner M.; Santos, David; Valen, Eivind; Gagnon, James A.; Maas, Kristi; Sandoe, Jackson; Schier, Alexander F.; Eggan, Kevin
Journal article
The CRISPR-Cas9 system has the potential to revolutionize genome editing in human pluripotent stem cells (hPSCs), but its advantages and pitfalls are still poorly understood. We systematically tested the ability of CRISPR-Cas9 to mediate reporter gene knockin at 16 distinct genomic sites in hPSCs. We observed efficient gene targeting but found that targeted clones carried an unexpectedly high frequency of insertion and deletion (indel) mutations at both alleles of the targeted gene. These indels were induced by Cas9 nuclease, as well as Cas9-D10A single or dual nickases, and often disrupted gene function. To overcome this problem, we designed strategies to physically destroy or separate CRISPR target sites at the targeted allele and developed a bioinformatic pipeline to identify and eliminate clones harboring deleterious indels at the other allele. This two-pronged approach enables the reliable generation of knockin hPSC reporter cell lines free of unwanted mutations at the targeted locus.
Fri, 01 May 2015 00:00:00 GMT
http://hdl.handle.net/1956/10995
FreeContact: Fast and free software for protein contact prediction from residue co-evolution
Kaján, László; Hopf, Thomas A.; Kalaš, Matúš; Marks, Debora S.; Rost, Burkhard
Journal article
Background: Twenty years of improved technology and growing sequence data now render residue-residue contact constraints, inferred from correlated mutations in large protein families, accurate enough to drive de novo predictions of protein three-dimensional structure. The method EVfold broke new ground using mean-field Direct Coupling Analysis (EVfold-mfDCA); the method PSICOV applied a related concept by estimating a sparse inverse covariance matrix. Both methods (EVfold-mfDCA and PSICOV) are publicly available, but both require too much CPU time for interactive applications. On top of that, EVfold-mfDCA depends on proprietary software.
Results: Here, we present FreeContact, a fast, open source implementation of EVfold-mfDCA and PSICOV. On a test set of 140 proteins, FreeContact was almost eight times faster than PSICOV without decreasing prediction performance. The EVfold-mfDCA implementation of FreeContact was over 220 times faster than PSICOV with negligible performance decrease. EVfold-mfDCA was unavailable for testing due to its dependency on proprietary software. FreeContact is implemented as the free C++ library “libfreecontact”, complete with command line tool “freecontact”, as well as Perl and Python modules. All components are available as Debian packages. FreeContact supports the BioXSD format for interoperability.
Conclusions: FreeContact provides the opportunity to compute reliable contact predictions in any environment (desktop or cloud).
Wed, 26 Mar 2014 00:00:00 GMT
http://hdl.handle.net/1956/10946
Independent Set on P5-free graphs, an empirical study
Haug, Håvard Karim
Master thesis
We implement the recent polynomial-time algorithm for the Independent Set problem on P5-free graphs, and study the performance of this algorithm on graphs with up to 50 vertices. Our empirical results show that the algorithm is probably much faster than its theoretical running-time upper bound.
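For graphs of the sizes studied in this thesis, an exponential-time reference solver is a natural way to validate a faster implementation. A minimal brute-force sketch (Python; names ours):

```python
from itertools import combinations

def max_independent_set(adj):
    """Exponential-time reference solver: return a largest vertex subset
    spanning no edge. Feasible only for small graphs, but handy as a
    correctness check for a faster implementation."""
    verts = list(adj)
    for size in range(len(verts), 0, -1):
        for cand in combinations(verts, size):
            cs = set(cand)
            if all(not (adj[v] & cs) for v in cand):
                return cs
    return set()

# A 5-cycle is P5-free (no induced path on five vertices) and has
# independence number 2.
c5 = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
print(len(max_independent_set(c5)))  # 2
```

Comparing such a solver's answers with the polynomial-time algorithm's on random small instances is a standard sanity check in an empirical study like this one.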
Fri, 20 Nov 2015 00:00:00 GMT
http://hdl.handle.net/1956/10774
Parameterized Graph Modification Algorithms
Drange, Pål Grønås
Doctoral thesis
<p>Graph modification problems form an important class of algorithmic problems in computer science. In this thesis, we study edge modification problems towards classes related to chordal graphs, with the main focus on trivially perfect graphs and threshold graphs. We provide several new results in classical complexity, kernelization complexity, and subexponential parameterized complexity. In all cases we give positive and negative results—giving polynomial time algorithms as well as NP-hardness results, polynomial kernels as well as polynomial kernel impossibility results, and we give subexponential time algorithms, and show that many problems do not admit such algorithms unless the exponential time hypothesis fails.</p>
<p>Our main focus is on the subexponential time complexity of edge modification problems. For that to make sense, we first need to figure out whether or not we actually need super-polynomial time. We show that editing towards trivially perfect graphs, threshold graphs, and chain graphs are all NP-complete, resolving 15-year-old open questions. When a problem is shown to be NP-complete, we study exactly how much exponential time is needed for an algorithm to solve it. We provide several subexponential time algorithms, for, e.g., editing towards chain graphs and threshold graphs, as well as completing towards trivially perfect graphs. We complement our results by showing that small alterations in the target graph classes yield much harder problems: Editing towards trivially perfect graphs and cographs is not possible in subexponential time unless the exponential time hypothesis fails.</p>
<p>A first step in our subexponential time algorithms, and an otherwise natural first step in dealing with NP-hard problems is offered by the toolbox of polynomial kernelization. In polynomial kernelizations, we are asked to design polynomial time compression algorithms that shrink the input instances to output instances bounded polynomially in a yes-solution. We provide polynomial kernels for all edge modification problems towards trivially perfect graphs, threshold graphs and chain graphs. In addition, we show that on bounded degree input graphs, we obtain polynomial kernels for any editing or deletion problem towards graph classes characterizable by a finite set of forbidden induced subgraphs. Finally, we show that we should not expect the same result for completion problems by proving that such a compression algorithm would imply the collapse of the polynomial hierarchy.</p>
Thu, 10 Dec 2015 00:00:00 GMT
http://hdl.handle.net/1956/10695
Security in cloud computing and virtual environments
Aarseth, Raymond
Master thesis
Cloud computing is a big buzzword today. Just watch the commercials on TV and I can promise that you will hear the words "cloud service" at least once. With the use of cloud technology steadily growing, and everything from cellphones to cars connected to the cloud, how secure is cloud technology? What are the caveats of using cloud technology? And how does it all work? This thesis discusses cloud security and the underlying technology, called virtualization, to better understand the security from a technical point of view. We show how some vulnerabilities can be exploited to steal personal information, and how an attacker can exploit vulnerabilities to take control of a system.
Mon, 14 Sep 2015 00:00:00 GMT
http://hdl.handle.net/1956/10660
BioXSD: the common data-exchange format for everyday bioinformatics web services
Kalaš, Matúš; Puntervoll, Pål; Joseph, Alexandre; Bartaševičiūtė, Edita; Töpfer, Armin; Venkataraman, Prabakar; Pettifer, Steve; Bryne, Jan Christian; Ison, Jon; Blanchet, Christophe; Rapacki, Kristoffer; Jonassen, Inge
Journal article
<p>Motivation: The world-wide community of life scientists has access to a large number of public bioinformatics databases and tools, which are developed and deployed using diverse technologies and designs. More and more of the resources offer a programmatic web-service interface. However, efficient use of the resources is hampered by the lack of widely used, standard data-exchange formats for the basic, everyday bioinformatics data types.</p>
<p>Results: BioXSD has been developed as a candidate for standard, canonical exchange format for basic bioinformatics data. BioXSD is represented by a dedicated XML Schema and defines syntax for biological sequences, sequence annotations, alignments and references to resources. We have adapted a set of web services to use BioXSD as the input and output format, and implemented a test-case workflow. This demonstrates that the approach is feasible and provides smooth interoperability. Semantics for BioXSD is provided by annotation with the EDAM ontology. We discuss in a separate section how BioXSD relates to other initiatives and approaches, including existing standards and the Semantic Web.</p>
<p>Availability: The BioXSD 1.0 XML Schema is freely available at http://www.bioxsd.org/BioXSD-1.0.xsd under the Creative Commons BY-ND 3.0 license. The http://bioxsd.org web page offers documentation, examples of data in BioXSD format, example workflows with source codes in common programming languages, an updated list of compatible web services and tools and a repository of feature requests from the community.</p>
Fri, 01 Jan 2010 00:00:00 GMT
http://hdl.handle.net/1956/10659
EDAM: an ontology of bioinformatics operations, types of data and identifiers, topics and formats
Ison, Jon; Kalaš, Matúš; Jonassen, Inge; Bolser, Dan; Uludag, Mahmut; McWilliam, Hamish; Malone, James; Lopez, Rodrigo; Pettifer, Steve; Rice, Peter
Journal article
<p>Motivation: Advancing the search, publication and integration of bioinformatics tools and resources demands consistent machine-understandable descriptions. A comprehensive ontology allowing such descriptions is therefore required.</p>
<p>Results: EDAM is an ontology of bioinformatics operations (tool or workflow functions), types of data and identifiers, application domains and data formats. EDAM supports semantic annotation of diverse entities such as Web services, databases, programmatic libraries, standalone tools, interactive applications, data schemas, datasets and publications within bioinformatics. EDAM applies to organizing and finding suitable tools and data and to automating their integration into complex applications or workflows. It includes over 2200 defined concepts and has successfully been used for annotations and implementations.</p>
<p>Availability: The latest stable version of EDAM is available in OWL format from http://edamontology.org/EDAM.owl and in OBO format from http://edamontology.org/EDAM.obo. It can be viewed online at the NCBO BioPortal and the EBI Ontology Lookup Service. For documentation and license please refer to http://edamontology.org. This article describes version 1.2 available at http://edamontology.org/EDAM_1.2.owl.</p>
Tue, 01 Jan 2013 00:00:00 GMT
http://hdl.handle.net/1956/10658
Efforts towards accessible and reliable bioinformatics
Kalaš, Matúš
Doctoral thesis
<p>The aim of the presented work was contributing to making scientific computing more accessible, reliable, and thus more efficient for researchers, primarily computational biologists and molecular biologists. Many approaches are possible and necessary towards these goals, and many layers need to be tackled, in collaborative community efforts with well-defined scope. As diverse components are necessary for the accessible and reliable bioinformatics scenario, our work focussed in particular on the following:</p>
<p>In the BioXSD project, we aimed at developing an XML-Schema-based data format compatible with Web services and programmatic libraries, that is expressive enough to be usable as a common, canonical data model that serves tools, libraries, and users with convenient data interoperability.</p>
<p>The EDAM ontology aimed at enumerating and organising concepts within bioinformatics, including operations and types of data. EDAM can be helpful in documenting and categorising bioinformatics resources using a standard “vocabulary”, enabling users to find respective resources and choose the right tools.</p>
<p>The eSysbio project explored ways of developing a workbench for collaborative data analysis, accessible in various ways for users with various tasks and expertise. We aimed at utilising the World-Wide-Web and industrial standards, in order to increase compatibility and maintainability, and foster shared effort.</p>
<p>In addition to these three main contributions that I have been involved in, I present a comprehensive but non-exhaustive research into the various previous and contemporary efforts and approaches to the broad topic of integrative bioinformatics, in particular with respect to bioinformatics software and services. I also mention some closely related efforts that I have been involved in.</p>
<p>The thesis is organised as follows: In the Background chapter, the field is presented, with various approaches and existing efforts. Summary of results summarises the contributions of my enclosed projects – the BioXSD data format, the EDAM ontology, and the eSysbio workbench prototype – to the broad topics of the thesis. The Discussion chapter presents further considerations and current work, and concludes the discussed contributions with alternative and future perspectives.</p>
<p>In the printed version, the three articles that are part of this thesis, are attached after the Discussion and References. In the electronic version (in PDF), the main thesis is optimised for reading on a screen, with clickable cross-references (e.g. from citations in the text to the list of References) and hyperlinks (e.g. for URLs and most References). A PDF viewer with “back“ function is recommended.</p>
Thu, 19 Nov 2015 00:00:00 GMT
http://hdl.handle.net/1956/10652
Towards a Secure Framework for mHealth. A Case Study in Mobile Data Collection Systems
Gejibo, Samson Hussien
Doctoral thesis
<p>The rapid growth in mobile communications technology and wide cellular coverage have created an opportunity to satisfy the demand for low-cost health care solutions. Mobile Health (a.k.a. mHealth) is a promising health service delivery concept that utilizes mobile communications technology to bridge the gap between remote, sparsely populated communities and health care providers. So far, several mHealth applications have been developed and deployed in the field. Among those, a digital information gathering and dissemination system using mobile devices is the main focus of this work. This type of mHealth system is called a Mobile Data Collection System (MDCS). Although MDCS improve on traditional paper-form-based data collection, they also bring unique challenges, such as data security in mobile communications technology. Even though MDCS are often used to collect sensitive health-related data, more work was needed to address security issues such as confidentiality, integrity, availability and authentication, in order to secure sensitive health-related information in storage, data exchange and processing.</p>
<p>When we began this work, Java ME enabled feature phones, that dominated the scene for a decade, were the choice of most MDCS. At that time, in collaboration with our partner project, we proposed a secure custom protocol. The protocol has been implemented, tested, and integrated into our reference MDCS. We have confirmed the flexibility of our secure solution by retrofitting the existing openXdata system with user authentication, secure storage and communication solutions by modifying only a few lines of code in the client-server application.</p>
<p>However, in the past few years, the explosion of new mobile platforms and cloud-based services became a game changer in our work. The move from feature phones to smartphones brought to the table the need to reevaluate, redesign, and port our earlier secure solution to smartphone-based MDCS, considering the unique features and challenges of both smartphone clients and cloud-based server-side deployments.</p>
<p>In this dissertation, we analyze the challenges in securing mobile data collection systems deployed in remote areas, in resource-constrained environments, and in low-budget project settings. We present a flexible and secure framework that offers user authentication both online and offline, secure mobile storage, secure communication, and secure cloud storage. In addition, the framework provides data integrity, user account and data recovery, and multi-user management, and is designed to be easily integrated into existing MDCS with minimal effort. Although fundamental security issues are conceptually identical in both the old feature phone and the current smartphone based solutions, our framework and the proposed solutions address the unique aspects of both mobile platforms. We also discuss the solutions we designed for older Java ME based devices, and how they are still relevant. For this work, we collaborated with the open-source MDCS openXdata and Open Data Kit (ODK).</p>
Thu, 05 Nov 2015 00:00:00 GMT
http://hdl.handle.net/1956/10587
Interactive Visual Analysis of Streaming Data
Smestad, Geir
Master thesis
Interactive Visual Analysis (IVA) has proven to be a robust set of methods for visually exploring complex data sets and generating hypotheses from data. Datasets and techniques where the temporal aspect is central have been an important area of study, both for the visualization field in general and for research on IVA. However, the challenge of handling streaming data sources for the purposes of decision support and analysis in real time has been given comparatively little attention. This thesis presents a summary of the visualization literature addressing time-oriented and streaming data, with emphasis on Interactive Visual Analysis and its related techniques. We then explain the contemporary distinction between real-time data monitoring and retrospective data analysis, explore challenges that occur when a human user attempts to visually analyze data in real time, and use these observations to extend the scope of IVA such that it can be used to analyze streaming data in real time.
Tue, 23 Sep 2014 00:00:00 GMT
http://hdl.handle.net/1956/10530
The genome sequence of Atlantic cod reveals a unique immune system
Star, Bastiaan; Nederbragt, Alexander Johan; Jentoft, Sissel; Grimholt, Unni; Malmstrøm, Martin; Gregers, Tone Fredsvik; Rounge, Trine Ballestad; Paulsen, Jonas; Solbakken, Monica Hongrø; Sharma, Animesh; Wetten, Ola Frang; Lanzén, Anders; Winer, Roger; Knight, James; Vogel, Jan-Hinnerk; Aken, Bronwen; Andersen, Øivind; Lagesen, Karin; Tooming-Klunderud, Ave; Edvardsen, Rolf; Kirubakaran, G. Tina; Espelund, Mari; Nepal, Chirag; Previti, A. Christopher; Karlsen, Bård Ove; Moum, Truls; Skage, Morten; Berg, Paul Ragnar; Gjøen, Tor; Kuhl, Heiner; Thorsen, Jim; Malde, Ketil; Reinhardt, Richard; Du, Lei; Johansen, Steinar Daae; Searle, Steve; Lien, Sigbjørn; Nilsen, Frank; Jonassen, Inge; Omholt, Stig W; Stenseth, Nils Christian; Jakobsen, Kjetill Sigurd
Journal article
Atlantic cod (Gadus morhua) is a large, cold-adapted teleost that sustains long-standing commercial fisheries and incipient aquaculture. Here we present the genome sequence of Atlantic cod, showing evidence for complex thermal adaptations in its haemoglobin gene cluster and an unusual immune architecture compared to other sequenced vertebrates. The genome assembly was obtained exclusively by 454 sequencing of shotgun and paired-end libraries, and automated annotation identified 22,154 genes. The major histocompatibility complex (MHC) II is a conserved feature of the adaptive immune system of jawed vertebrates, but we show that Atlantic cod has lost the genes for MHC II, CD4 and invariant chain (Ii) that are essential for the function of this pathway. Nevertheless, Atlantic cod is not exceptionally susceptible to disease under natural conditions. We find a highly expanded number of MHC I genes and a unique composition of its Toll-like receptor (TLR) families. This indicates how the Atlantic cod immune system has evolved compensatory mechanisms in both adaptive and innate immunity in the absence of MHC II. These observations affect fundamental assumptions about the evolution of the adaptive immune system and its components in vertebrates.
Thu, 01 Sep 2011 00:00:00 GMThttp://hdl.handle.net/1956/105302011-09-01T00:00:00ZJASPAR 2014: An extensively expanded and updated open-access database of transcription factor binding profiles
http://hdl.handle.net/1956/10452
JASPAR 2014: An extensively expanded and updated open-access database of transcription factor binding profiles
Mathelier, Anthony; Zhao, Xiaobei; Zhang, Allen W.; Parcy, François; Worsley-Hunt, Rebecca; Arenillas, David J.; Buchman, Sorana; Chen, Chih-yu; Chou, Alice; Ienasescu, Hans; Lim, Jonathan; Shyr, Casper; Tan, Ge; Zhou, Michelle; Lenhard, Boris; Sandelin, Albin; Wasserman, Wyeth W.
Journal article
JASPAR (http://jaspar.genereg.net) is the largest open-access database of matrix-based nucleotide profiles describing the binding preference of transcription factors from multiple species. The fifth major release greatly expands the heart of JASPAR—the JASPAR CORE subcollection, which contains curated, non-redundant profiles—with 135 new curated profiles (74 in vertebrates, 8 in Drosophila melanogaster, 10 in Caenorhabditis elegans and 43 in Arabidopsis thaliana; a 30% increase in total) and 43 older updated profiles (36 in vertebrates, 3 in D. melanogaster and 4 in A. thaliana; a 9% update in total). The new and updated profiles are mainly derived from published chromatin immunoprecipitation-seq experimental datasets. In addition, the web interface has been enhanced with advanced capabilities in browsing, searching and subsetting. Finally, the new JASPAR release is accompanied by a new BioPython package, a new R tool package and a new R/Bioconductor data package to facilitate access for both manual and automated methods.
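As an illustrative aside (this is not code from JASPAR or its companion packages): the matrix-based profiles in JASPAR CORE are position frequency matrices, which tools commonly convert to log-odds position weight matrices before scoring candidate binding sites. A minimal sketch with an invented toy matrix:

```python
import math

# A toy position frequency matrix (PFM) in the style of a JASPAR profile:
# keys are nucleotides, lists are per-position counts. The counts are
# invented for illustration, not taken from any real JASPAR entry.
pfm = {
    "A": [10, 0, 2],
    "C": [0, 12, 1],
    "G": [1, 0, 9],
    "T": [1, 0, 0],
}

def pfm_to_pwm(pfm, pseudocount=0.8, background=0.25):
    """Convert counts to a log2-odds position weight matrix (PWM)."""
    length = len(next(iter(pfm.values())))
    totals = [sum(pfm[b][i] for b in pfm) for i in range(length)]
    return {
        base: [
            math.log2((counts[i] + pseudocount)
                      / (totals[i] + 4 * pseudocount) / background)
            for i in range(length)
        ]
        for base, counts in pfm.items()
    }

def score(pwm, site):
    """Score a candidate site by summing per-position weights."""
    return sum(pwm[base][i] for i, base in enumerate(site))

pwm = pfm_to_pwm(pfm)
print(score(pwm, "ACG") > score(pwm, "TTT"))  # the consensus scores higher
```

The pseudocount and uniform background are assumed illustrative defaults; real pipelines choose these per analysis.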
Wed, 01 Jan 2014 00:00:00 GMThttp://hdl.handle.net/1956/104522014-01-01T00:00:00ZSkaping av meirverdi gjennom opne data om kollektivtrafikk
http://hdl.handle.net/1956/10429
Skaping av meirverdi gjennom opne data om kollektivtrafikk
Bergheim, Livar
Master thesis
The topic of this thesis is open data, restricted to public transport data. The in-depth study focuses on the activities of Skyss, the county public transport company in Hordaland. The term "open data" implies that others are permitted to reuse the data in applications, in pursuit of ideal goals such as innovation, transparency and efficiency. The challenge with open data in this type of enterprise is the complexity of data elements ordered in time and space. At the same time, the data are dynamic, and some datasets change in real time, e.g. predictions of how many minutes remain until the next buses arrive at a given stop.
In this thesis I ask what qualities an application programming interface (API) should have in order to ease access to open data and the functionality tied to it. My position, as a result of the study, is that web APIs following the REST architectural style are more flexible and easier to adopt and to integrate with data from other sources and with other applications.
The research question is twofold. First, how public transport data can be made available for reuse in a good way. Second, to what extent public transport data, possibly combined with data from other sources, can be exploited to create new applications/services of value.
Through discussions and a developed app I have demonstrated:
1) That making public transport data available as open data can create added value in the form of new apps and services, transparency and efficiency.
2) That, through a web API, it is relatively easy for developers with the right skills to apply knowledge and creativity to new applications.
3) How one can provide a good web API that makes it possible and easier for developers to use the data.
Much has happened in recent years in terms of technology and the data available about public transport. This can contribute to increased use of public transport, which is environmentally and politically desirable.
Mon, 04 May 2015 00:00:00 GMThttp://hdl.handle.net/1956/104292015-05-04T00:00:00ZSMS One-Time Passwords, Security in Two-Factor Authentication
http://hdl.handle.net/1956/10426
SMS One-Time Passwords, Security in Two-Factor Authentication
Eide, Jan-Erik Lothe
Master thesis
In the past decade, the low price and ease of generating and sending large amounts of SMS have made it possible for many online services to create strong and affordable authentication systems. With the growth of smartphones on the market, authentication systems that use mobile phones have lost some of their security: these systems rely on mobile phones being independent devices, separate from personal computers. This thesis investigates weaknesses in authentication systems that send vital information to mobile phones via SMS. We show that services relying on this type of authentication are vulnerable to attack. The intended audience for this thesis is computer scientists and professional and amateur software developers, but anyone with basic IT knowledge is encouraged to keep reading.
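As a hedged illustration of the technology under study (background, not the thesis's own material): one-time codes, including those delivered over SMS, are often generated with the HMAC-based one-time password algorithm of RFC 4226. A minimal sketch, checkable against the RFC's published test vectors:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-based one-time password (RFC 4226, with dynamic truncation)."""
    msg = struct.pack(">Q", counter)               # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                     # low nibble picks a 4-byte window
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 4226 test vectors for the ASCII secret "12345678901234567890"
secret = b"12345678901234567890"
print(hotp(secret, 0))  # 755224
print(hotp(secret, 1))  # 287082
```

Note that the weaknesses the thesis investigates lie in the SMS delivery channel, not in this generation step.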
Fri, 29 May 2015 00:00:00 GMThttp://hdl.handle.net/1956/104262015-05-29T00:00:00ZImplementasjon av attributtbasert tilgangskontroll i elektroniske helsesystemer
http://hdl.handle.net/1956/10425
Implementasjon av attributtbasert tilgangskontroll i elektroniske helsesystemer
Kristensen, André Sæ
Master thesis
Access control is one of the most important topics in information security [10]. Sensitive data should only be accessible to authorized users or programs. The main goal of this thesis is to examine the current access control in two widely used electronic health systems: Open Medical Record System (OpenMRS) and Open Data Kit Aggregate (ODK Aggregate). Furthermore, we explore the possibility of implementing more flexible access control in these systems using Attribute Based Access Control (ABAC). Before this examination, existing access control mechanisms are surveyed along with their advantages and disadvantages. The conclusion of the work is that ABAC cannot be implemented in these systems in an optimal way without redesigning them. Parts of the existing access control must therefore be retained, and only a limited form of ABAC can be implemented. This is because the systems were developed for a different type of access control, and design choices made during development have left the access control too tightly integrated into the application. Based on the experience with OpenMRS and ODK Aggregate, an electronic health system named Medical was built, in which ABAC was included from the start and implemented in an optimal way. The Medical project is a Proof of Concept (PoC) that demonstrates how flexible attribute-based access control can be when it is included in the design phase.
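As an illustrative sketch of the ABAC idea (the attribute names and the single policy are invented, and this is far simpler than the XACML-style engines real deployments use): access is decided by evaluating policy rules over subject, resource and action attributes, rather than by fixed roles alone.

```python
# Minimal attribute-based access control (ABAC) check. The attributes and
# policy below are hypothetical examples, not taken from OpenMRS or ODK.

def permit(subject: dict, resource: dict, action: str, policies) -> bool:
    """Grant access iff some policy rule matches all given attributes."""
    return any(rule(subject, resource, action) for rule in policies)

# Policy: clinicians may read records of patients in their own ward.
policies = [
    lambda s, r, a: (a == "read"
                     and s.get("role") == "clinician"
                     and s.get("ward") == r.get("ward")),
]

doctor = {"role": "clinician", "ward": "A"}
record = {"type": "health-record", "ward": "A"}
print(permit(doctor, record, "read", policies))   # True
print(permit(doctor, record, "write", policies))  # False
```

The flexibility the thesis points to comes from policies being data: adding a rule changes behaviour without touching application code.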
Thu, 28 May 2015 00:00:00 GMThttp://hdl.handle.net/1956/104252015-05-28T00:00:00ZA Survey of Linear-Programming Guided Branching Parameterized Algorithms for Vertex Cover, with Experimental Results
http://hdl.handle.net/1956/10402
A Survey of Linear-Programming Guided Branching Parameterized Algorithms for Vertex Cover, with Experimental Results
Urhaug, Tobias Sørensen
Master thesis
A survey of FPT algorithms for Vertex Cover, parameterized by an above-guarantee parameter.
Mon, 01 Jun 2015 00:00:00 GMThttp://hdl.handle.net/1956/104022015-06-01T00:00:00ZMaximum number of objects in graph classes.
http://hdl.handle.net/1956/10394
Maximum number of objects in graph classes.
Hellestø, Marit Kristine Astad
Master thesis
The focus of this thesis is the study and implementation of two exact exponential-time algorithms. These algorithms find and count the minimal dominating sets and the minimal subset feedback vertex sets in chordal and split graphs. Specifically, the goal of this thesis is to study the upper and lower bounds on the number of minimal dominating sets and minimal subset feedback vertex sets in chordal and split graphs, to see whether it is possible to improve these bounds. In fact, it will be shown that there exists a better lower bound on the number of minimal subset feedback vertex sets in split graphs.
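As a hedged aside (this is not the thesis's exact exponential-time algorithms), the objects being counted can be illustrated with a naive enumerator of minimal dominating sets, feasible only for very small graphs:

```python
from itertools import combinations

def dominates(adj, s):
    """True if every vertex is in s or adjacent to a vertex of s."""
    covered = set(s)
    for v in s:
        covered |= adj[v]
    return covered == set(adj)

def minimal_dominating_sets(adj):
    """Enumerate all minimal dominating sets by brute force (exponential)."""
    vertices = list(adj)
    result = []
    for k in range(1, len(vertices) + 1):
        for cand in combinations(vertices, k):
            s = set(cand)
            # Minimal: dominating, and removing any vertex breaks domination.
            if dominates(adj, s) and all(
                not dominates(adj, s - {v}) for v in s
            ):
                result.append(s)
    return result

# Path on three vertices 0-1-2: the minimal dominating sets are {1} and {0,2}.
path3 = {0: {1}, 1: {0, 2}, 2: {1}}
print(minimal_dominating_sets(path3))  # [{1}, {0, 2}]
```

The point of the thesis's algorithms is precisely to avoid this 2^n enumeration on chordal and split graphs.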
Sun, 31 May 2015 00:00:00 GMThttp://hdl.handle.net/1956/103942015-05-31T00:00:00ZProjective Simulation compared to reinforcement learning
http://hdl.handle.net/1956/10391
Projective Simulation compared to reinforcement learning
Bjerland, Øystein Førsund
Master thesis
This thesis explores the model of projective simulation (PS), a novel approach for an artificial intelligence (AI) agent. The PS model learns by interacting with the environment it is situated in, and allows for simulating actions before real action is taken. Action selection is based on a random walk through the episodic & compositional memory (ECM), which is a network of clips that represent previously experienced percepts. The network takes percepts as inputs and returns actions. Through the rewards from the environment, the clip network adjusts itself dynamically such that the probability of taking the most favourable (i.e., most rewarded) action is increased in similar subsequent situations. With a feature called generalisation, new internal clips can be created dynamically, so that the network grows into a multilayer network, which improves the classification and grouping of percepts.
In this thesis the PS model is tested on a large and complex task: learning to play the classic Mario platform game. Throughout the thesis the model is compared to the typical reinforcement learning (RL) algorithms Q-Learning and SARSA by means of experimental simulations. A framework for PS was built for this thesis, and games used in the papers that introduced PS were used to validate the correctness of the framework. Games are often used as a benchmark for learning agents; one reason is that the rules of the experiment are already defined and the evaluation can easily be compared to human performance. The games used in this thesis are: the Blocking game, Mountain Car, Pole Balancing and, finally, Mario. The results show that the PS model is competitive with RL for complex tasks, and that the evolving network improves performance.
A quantum version of the PS model has recently been proven to realise a quadratic speed-up compared to the classical version, and this was one of the primary reasons for the introduction of the PS model. This quadratic speed-up is very promising, as training AI is computationally heavy and requires a large state space. This thesis, however, considers only the classical version of the PS model.
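As a hedged sketch of the learning rule described above (a two-layer percept-to-action network only, omitting generalisation, multi-step walks and the quantum variant; the damping constant is an assumed illustrative value):

```python
import random

class ProjectiveSimulationAgent:
    """Minimal two-layer projective simulation (PS) sketch.

    Percept clips connect directly to action clips via h-values; a hop to
    an action is drawn with probability proportional to its h-value.
    """

    def __init__(self, n_percepts, n_actions, damping=0.01):
        self.h = [[1.0] * n_actions for _ in range(n_percepts)]
        self.damping = damping

    def act(self, percept):
        weights = self.h[percept]
        return random.choices(range(len(weights)), weights=weights)[0]

    def learn(self, percept, action, reward):
        # Damping slowly forgets old experience; reward strengthens the edge.
        for a in range(len(self.h[percept])):
            self.h[percept][a] -= self.damping * (self.h[percept][a] - 1.0)
        self.h[percept][action] += reward

random.seed(1)
agent = ProjectiveSimulationAgent(n_percepts=1, n_actions=2)
for _ in range(200):          # always reward action 0 under percept 0
    a = agent.act(0)
    agent.learn(0, a, reward=1.0 if a == 0 else 0.0)
p0 = agent.h[0][0] / sum(agent.h[0])
print(p0 > 0.9)  # the rewarded action now dominates
```

The damping term is what lets the agent re-adapt if the environment's reward scheme later changes.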
Mon, 01 Jun 2015 00:00:00 GMThttp://hdl.handle.net/1956/103912015-06-01T00:00:00ZChoice of parameter for DP-based FPT algorithms: four case studies
http://hdl.handle.net/1956/10368
Choice of parameter for DP-based FPT algorithms: four case studies
Sæther, Sigve Hortemo
Doctoral thesis
<p>This thesis studies dynamic programming algorithms and structural parameters used when solving computationally hard problems. In particular, we look at algorithms that make use of structural decompositions to overcome difficulties of solving a problem, and find alternative runtime parameterizations for some of these problems.</p>
<p>The algorithms we look at make use of branch decompositions to guide the algorithm when doing dynamic programming. Algorithms of this type consist of two parts: the first part computes a decomposition of the input, and the second part solves the given problem by dynamic programming over the computed decomposition. By altering which properties of an input instance these decompositions exploit, the runtime of the complete algorithm changes. We look at four cases where altering the structural properties of the decomposition (i.e., changing which width measure of the decomposition to minimize) is used to improve an algorithm.</p>
<p>The first case looks at using branch decompositions of low maximum matching-width (mm-width) instead of tree-decompositions of low treewidth when solving Dominating Set. The result is an algorithm that is faster than the treewidth-based algorithms on instances where the treewidth is at least 1.55 times the mm-width.</p>
<p>In the second case, we look at using branch decompositions of low split-matching-width (sm-width) for cases where tree-decompositions or k-expressions will not do. This study leads to new tractability results for Hamiltonian Cycle, Edge Dominating Set, Chromatic Number, and MaxCut on a class of dense graphs.</p>
<p>For the third case, we look at using branch decompositions of low Q-rank-width as an alternative to branch decompositions of low rank-width for solving a large class of problems definable as [σ,ρ]-partition problems. This class contains many domination-type problems, such as Dominating Set and Independent Set. One result of using such alternative branch decompositions is an improved worst-case runtime for Dominating Set parameterized by the clique-width cw; namely $O^*(cw^{O(cw)})$ over the previous best $O^*(2^{O(cw^2)})$.</p>
<p>The fourth case looks at using branch decompositions of low projection-satisfiable-width (ps-width) for solving #SAT and MaxSAT on CNF formulas. We define the notion of having low ps-width and show that a dynamic programming algorithm exploiting the ps-width of a branch decomposition yields new tractability results for #SAT and MaxSAT, and a framework unifying many previous tractability results.</p>
<p>We also show that deciding the boolean-width of a graph is NP-hard and that deciding the mim-width of a graph is W[1]-hard. In fact, under the assumption NP ≠ ZPP, we show that mim-width cannot be approximated to within a constant factor in polynomial time.</p>
Mon, 07 Sep 2015 00:00:00 GMThttp://hdl.handle.net/1956/103682015-09-07T00:00:00ZExact algorithms for MAX-2SAT and MAX-3SAT via multidimensional matrix multiplication
http://hdl.handle.net/1956/10348
Exact algorithms for MAX-2SAT and MAX-3SAT via multidimensional matrix multiplication
Petkevich, Yevgheni
Master thesis
In this thesis it is shown how an $O(n^{4-\epsilon})$ algorithm for the cube multiplication problem (defined in the thesis) would imply a faster-than-naive $O^{*}(2^{n(1-\frac{\epsilon}{4})})$ algorithm for the MAX-3SAT problem. This algorithm for MAX-3SAT is a generalization of the algorithm for the MAX-2SAT problem proposed by Ryan Williams, and cube multiplication, in turn, is defined as a generalization of the matrix multiplication problem to three-dimensional arrays. Approaches to finding a faster-than-naive algorithm for cube multiplication are considered. Though no such algorithm was found using these approaches, it is shown how a variant of the Strassen algorithm for matrix multiplication could be found using the same approaches. Implementations of these approaches as computer programs and the results of computational experiments are discussed.
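As a hedged point of reference (this is the naive $O^*(2^n)$ baseline the thesis aims to beat, not the matrix-multiplication-based algorithm itself), MAX-SAT can be solved by trying every assignment:

```python
from itertools import product

def max_sat(clauses, n_vars):
    """Naive MAX-SAT: try every assignment, return the max satisfied count.

    A literal is a nonzero integer: +i means variable i true, -i means false.
    """
    best = 0
    for bits in product([False, True], repeat=n_vars):
        satisfied = sum(
            any(bits[abs(l) - 1] == (l > 0) for l in clause)
            for clause in clauses
        )
        best = max(best, satisfied)
    return best

# (x1 or x2), (!x1 or x2), (x1 or !x2), (!x1 or !x2):
# every assignment satisfies exactly 3 of these 4 clauses.
clauses = [(1, 2), (-1, 2), (1, -2), (-1, -2)]
print(max_sat(clauses, 2))  # 3
```

Williams-style algorithms beat this by reducing the search over assignment halves to a (fast) matrix product.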
Mon, 01 Jun 2015 00:00:00 GMThttp://hdl.handle.net/1956/103482015-06-01T00:00:00ZLocalizing Cell Towers from Crowdsourced Measurements
http://hdl.handle.net/1956/10302
Localizing Cell Towers from Crowdsourced Measurements
Rusvik, Johan Alexander Nordstrand
Master thesis
Today, several internet sites aim to provide the locations and number of cellular network antennas worldwide, for example [1], [2] and [3]. What makes this task difficult is the lack of available information about the whereabouts and number of antennas. Only in a few countries are the correct locations of some cellular network antennas known. Otherwise, these sites base their knowledge of antenna locations on measurement data collected through crowdsourcing. OpenCellID uses a simple and primitive algorithm for estimating antenna locations from such measurements. In this thesis we suggest an alternative approach to localizing cellular network antennas based on data provided by OpenCellID. We start by giving an introduction to the problem and a brief overview of related work, which includes localization of mobile devices in addition to localization of cellular network antennas. We then present background for our algorithm development, and develop two similar algorithms for localizing cellular network antennas: one utilizes the distances between measurements, the other utilizes the Received Signal Strength (RSS) values of the measurements. We experiment with the two algorithms on theoretically generated test data, and argue that utilizing RSS gives the most accurate estimated antenna locations. Next we present the OpenCellID data. We explore this data in detail before defining two subsets to test our algorithms on: one contains measurement data where the correct antenna locations are known; the other contains measurement data for antennas in the Bergen city centre area. We then estimate antenna locations with our two algorithms for both subsets. Our tests show that utilizing RSS yields more accurate antenna location estimates when the correct locations are known and can be compared against. We end the thesis by analyzing two measurement distribution patterns and proposing how the algorithms can be improved.
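As an illustrative sketch (this is neither OpenCellID's algorithm nor either of the thesis's two algorithms): a common simple way to use RSS for antenna localization is a weighted centroid of the measurement positions, with dBm values linearized into weights so that stronger signals pull the estimate harder:

```python
# Each measurement is (lat, lon, RSS in dBm); the coordinates and RSS values
# below are invented. Converting dBm to milliwatts before averaging is one
# common heuristic among several.

def weighted_centroid(measurements):
    """Estimate an antenna position from (lat, lon, rss_dbm) triples."""
    weights = [10 ** (rss / 10.0) for _, _, rss in measurements]  # dBm -> mW
    total = sum(weights)
    lat = sum(w * m[0] for w, m in zip(weights, measurements)) / total
    lon = sum(w * m[1] for w, m in zip(weights, measurements)) / total
    return lat, lon

# Four symmetric measurements with equal RSS average out to the centre.
obs = [(60.0, 5.0, -70), (60.2, 5.0, -70), (60.0, 5.2, -70), (60.2, 5.2, -70)]
print(weighted_centroid(obs))  # approximately (60.1, 5.1)
```

With unequal RSS values the estimate shifts toward the strongest measurements, which is the intuition behind RSS-based localization.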
Mon, 01 Jun 2015 00:00:00 GMThttp://hdl.handle.net/1956/103022015-06-01T00:00:00ZA Model of Type Theory in Cubical Sets
http://hdl.handle.net/1956/10215
A Model of Type Theory in Cubical Sets
Bezem, Marcus Aloysius; Coquand, Thierry; Huber, Simon
Journal article
We present a model of type theory with dependent product, sum, and identity, in cubical sets. We describe a universe and explain how to transform an equivalence between two types into an equality. We also explain how to model propositional truncation and the circle. While not expressed internally in type theory, the model is expressed in a constructive metalogic. Thus it is a step towards a computational interpretation of Voevodsky's Univalence Axiom.
Wed, 01 Jan 2014 00:00:00 GMThttp://hdl.handle.net/1956/102152014-01-01T00:00:00ZMaximum number of edges in graph classes under degree and matching constraints
http://hdl.handle.net/1956/9951
Maximum number of edges in graph classes under degree and matching constraints
Måland, Erik Kvam
Master thesis
In extremal graph theory, we ask how large or small a property of a graph can be when the graph has to satisfy certain constraints. In this thesis, we ask how many edges a graph can have under restrictions on its degree and matching number, when the graph belongs to a given graph class. The solutions for general graphs and bipartite graphs are known. We present here the solution for split graphs, disjoint unions of split graphs, and unit interval graphs. This is related to determining the Ramsey numbers of certain graph classes.
In addition, we present a characterization of factor-critical chordal graphs in terms of spanning subgraphs.
Tue, 12 May 2015 00:00:00 GMThttp://hdl.handle.net/1956/99512015-05-12T00:00:00ZCommunity Detection in Social Networks
http://hdl.handle.net/1956/9944
Community Detection in Social Networks
Fasmer, Erlend Eindride
Master thesis
Social networks usually display a hierarchy of communities, and it is the task of community detection algorithms to detect these communities and preferably also their hierarchical relationships. One common class of such hierarchical algorithms is the agglomerative algorithms. These start with one community per vertex in the network and keep agglomerating vertices to form increasingly larger communities. Another common class is the divisive algorithms, which start with a single community comprising all the vertices of the network and then split the network into several connected components that are viewed as communities.
We start this thesis by giving an introductory overview of the field of community detection in part I, including complex networks, the basic groups of community definitions, the modularity function, and a description of common community detection techniques, including agglomerative and divisive algorithms. Then we proceed, in part II, with community detection algorithms that have been implemented and tested, with refined use of data structures, as part of this thesis. We start by describing, implementing, and testing against benchmark graphs the greedy hierarchical agglomerative community detection algorithm proposed by Aaron Clauset, M. E. J. Newman, and Cristopher Moore in 2004 in the article Finding community structure in very large networks [5]. We continue by describing and implementing the hierarchical divisive algorithm proposed by Filippo Radicchi, Claudio Castellano, Federico Cecconi, Vittorio Loreto, and Domenico Parisi in 2004 in the article Defining and identifying communities in networks [28]. Instead of testing this algorithm against benchmark graphs, we present a community detection web service that runs the algorithm by Radicchi et al. on the collaboration networks in the DBLP database of scientific publications and co-authorships in the area of computer science. We allow the user to freely set the many parameters that we have defined for this algorithm. The final judgment on the results is measured by the modularity value or can be left to the knowledgeable user. A rough description of the design of the algorithms and of the web service is given, and all code is available at GitHub [10] [9].
Lastly, a few improvements both to the algorithm by Radicchi et al. and to the web service are presented.
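As a hedged aside on the modularity function mentioned above: Newman's modularity for a given partition is Q = sum over communities c of (e_c/m - (d_c/2m)^2), where e_c counts intra-community edges and d_c sums degrees. A minimal computation on an invented graph:

```python
from collections import Counter

def modularity(edges, community):
    """Newman's Q = sum_c (e_c/m - (d_c/(2m))^2) for a fixed partition.

    `edges` lists undirected edges; `community` maps vertex -> label.
    """
    m = len(edges)
    intra = Counter()   # edges with both endpoints in community c
    degree = Counter()  # total degree of community c
    for u, v in edges:
        degree[community[u]] += 1
        degree[community[v]] += 1
        if community[u] == community[v]:
            intra[community[u]] += 1
    return sum(intra[c] / m - (degree[c] / (2 * m)) ** 2 for c in degree)

# Two disjoint triangles, each its own community: Q = 2*(3/6 - (6/12)^2) = 0.5
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)]
community = {0: "a", 1: "a", 2: "a", 3: "b", 4: "b", 5: "b"}
print(modularity(edges, community))  # 0.5
```

Agglomerative algorithms such as Clauset-Newman-Moore greedily merge the pair of communities whose merge increases this Q the most.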
Fri, 01 May 2015 00:00:00 GMThttp://hdl.handle.net/1956/99442015-05-01T00:00:00ZFast Method for Maximum-Flow Problem with Minimum-Lot Sizes
http://hdl.handle.net/1956/9903
Fast Method for Maximum-Flow Problem with Minimum-Lot Sizes
Ganeshan, Vithya
Master thesis
In transportation networks, such as pipeline networks for transporting natural gas, it is often impractical to send amounts of flow below a certain threshold. Such a lower threshold is referred to as the minimum-lot size. The network flow is semi-continuous when a network has minimum-lot sizes: the flow on an arc can be either zero or between the minimum-lot size and the maximum capacity. When a network includes minimum-lot constraints, the problem of finding a maximum flow becomes complex, and exact methods tend to be too time-consuming. Since solution methods are not generally required to provide the optimal solution, this master thesis proposes a fast (inexact) method to find near-optimal solutions. The proposed method is experimentally validated and compared with other relevant approaches from the literature, and the results are analyzed in detail.
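As an illustrative sketch of the semi-continuity constraint (the network and numbers are invented, and the thesis's fast method for *finding* such flows is not reproduced): a flow is feasible if every arc carries either zero or an amount within its lot/capacity window, and inner nodes conserve flow.

```python
# Semi-continuous flow feasibility check. `flow`, `lot` and `capacity` map
# arcs (u, v) to numbers; the example network below is hypothetical.

def is_feasible(flow, lot, capacity, source, sink):
    """Check semi-continuity on every arc and conservation at inner nodes."""
    for arc, f in flow.items():
        if f != 0 and not (lot[arc] <= f <= capacity[arc]):
            return False
    balance = {}
    for (u, v), f in flow.items():
        balance[u] = balance.get(u, 0) - f
        balance[v] = balance.get(v, 0) + f
    return all(b == 0 for n, b in balance.items() if n not in (source, sink))

arcs = [("s", "a"), ("a", "t")]
lot = {a: 5 for a in arcs}
capacity = {a: 10 for a in arcs}
print(is_feasible({("s", "a"): 7, ("a", "t"): 7}, lot, capacity, "s", "t"))  # True
print(is_feasible({("s", "a"): 3, ("a", "t"): 3}, lot, capacity, "s", "t"))  # False
```

The disjunctive "zero or at least the lot" condition is what makes the optimization problem hard for standard max-flow algorithms.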
Tue, 03 Mar 2015 00:00:00 GMThttp://hdl.handle.net/1956/99032015-03-03T00:00:00ZMolecular mechanisms of adaptation emerging from the physics and evolution of nucleic acids and proteins
http://hdl.handle.net/1956/9885
Molecular mechanisms of adaptation emerging from the physics and evolution of nucleic acids and proteins
Goncearenco, Alexander; Ma, Binguang; Berezovsky, Igor
Journal article
DNA, RNA and proteins are major biological macromolecules that coevolve and adapt to environments as components of one highly interconnected system. We explore here sequence/structure determinants of mechanisms of adaptation of these molecules, links between them, and results of their mutual evolution. We complemented statistical analysis of genomic and proteomic sequences with folding simulations of RNA molecules, unraveling causal relations between compositional and sequence biases reflecting molecular adaptation on the DNA, RNA and protein levels. We found many compositional peculiarities related to environmental adaptation and lifestyle. Specifically, thermal adaptation of protein-coding sequences in Archaea is characterized by a stronger codon bias than in Bacteria. Guanine and cytosine load in the third codon position is important for supporting the aerobic lifestyle, and it is highly pronounced in Bacteria. The third codon position also provides a tradeoff between arginine and lysine, which are favorable for thermal adaptation and aerobicity, respectively. Dinucleotide composition provides stability of nucleic acids via strong base-stacking in ApG dinucleotides. In relation to the coevolution of nucleic acids and proteins, thermostability-related demands on the amino acid composition affect the nucleotide content in the second codon position in Archaea.
Wed, 01 Jan 2014 00:00:00 GMThttp://hdl.handle.net/1956/98852014-01-01T00:00:00ZCombining Aspect-Oriented and Strategic Programming
http://hdl.handle.net/1956/9807
Combining Aspect-Oriented and Strategic Programming
Kalleberg, Karl Trygve; Visser, Eelco
Journal article
Properties such as logging, persistence, debugging, tracing, distribution, performance monitoring and exception handling occur in most programming paradigms and are normally very difficult or even impossible to modularize with traditional modularization mechanisms, because they are cross-cutting. Recently, aspect-oriented programming has enjoyed recognition as a practical solution for separating these concerns. In this paper we describe an extension to the Stratego term rewriting language for capturing such properties. We show that our aspect language offers a concise, practical and adaptable solution for dealing with unanticipated algorithm extension for forward data-flow propagation and dynamic type checking of terms. We briefly discuss some of the challenges faced when designing and implementing an aspect extension for, and in, a rule-based term rewriting system.
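As a hedged illustration of what "cross-cutting" means in practice (in Python rather than Stratego, and far more limited than a real aspect weaver): a tracing decorator weaves logging advice around functions without editing their bodies, which is the kind of separation aspect-oriented programming provides.

```python
import functools

calls = []  # the log that the tracing "aspect" writes to

def traced(fn):
    """Weave enter/exit tracing advice around a function."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        calls.append(f"enter {fn.__name__}{args}")
        result = fn(*args, **kwargs)
        calls.append(f"exit {fn.__name__} -> {result}")
        return result
    return wrapper

@traced
def add(a, b):
    # The function body stays free of any logging code.
    return a + b

add(2, 3)
print(calls)  # ['enter add(2, 3)', 'exit add -> 5']
```

A full aspect language generalizes this: pointcuts select many join points at once, instead of decorating each function by hand.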
Sun, 01 Jan 2006 00:00:00 GMThttp://hdl.handle.net/1956/98072006-01-01T00:00:00ZInterfacing concepts: Why declaration style shouldn't matter
http://hdl.handle.net/1956/9805
Interfacing concepts: Why declaration style shouldn't matter
Bagge, Anya Helene; Haveraaen, Magne
Journal article
<p>A concept (or signature) describes the interface of a set of abstract types by listing the operations that should be supported for those types. When implementing a generic operation, such as sorting, we may then specify requirements such as “elements must be comparable” by requiring that the element type models the Comparable concept. We may also use axioms to describe behaviour that should be common to all models of a concept.</p>
<p>However, the operations specified by the concept are not always the ones that are best suited for the implementation. For example, numbers and matrices may both be addable, but adding two numbers is conveniently done by using a return value, whereas adding a sparse and a dense matrix is probably best achieved by modifying the dense matrix. In both cases, though, we may want to pretend we're using a simple function with a return value, as this most closely matches the notation we know from mathematics. This paper presents two simple concepts to break the notational tie between implementation and use of an operation: functionalisation, which derives a set of canonical pure functions from a procedure; and mutification, which translates calls using the functionalised declarations into calls to the implemented procedure.</p>
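As a hedged Python rendering of the paper's idea (the vector-addition example is invented; the paper works at the level of language concepts, not runtime wrappers): functionalisation derives a canonical pure function from an in-place procedure, so call sites can keep the mathematical notation.

```python
def add_into(dest, a, b):
    """The implemented procedure: overwrite `dest` with a + b, in place.

    This mirrors the sparse-plus-dense matrix case in the abstract, where
    mutating the destination is the efficient implementation.
    """
    for i in range(len(dest)):
        dest[i] = a[i] + b[i]

def add(a, b):
    """Functionalised wrapper: allocate a result, delegate to the procedure."""
    dest = [0] * len(a)
    add_into(dest, a, b)
    return dest

# Callers write the familiar c = add(a, b); 'mutification' is the reverse
# translation, compiling such calls back into in-place procedure calls.
print(add([1, 2], [10, 20]))  # [11, 22]
```

The benefit the paper argues for is exactly this decoupling: declaration style (procedure vs. function) stops dictating usage style.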
Fri, 17 Sep 2010 00:00:00 GMThttp://hdl.handle.net/1956/98052010-09-17T00:00:00ZHybrid visibility compositing and masking for illustrative rendering
http://hdl.handle.net/1956/9802
Hybrid visibility compositing and masking for illustrative rendering
Bruckner, Stefan; Rautek, Peter; Viola, Ivan; Roberts, Mike; Sousa, Mario Costa; Gröller, M. Eduard
Journal article
In this paper, we introduce a novel framework for the compositing of interactively rendered 3D layers tailored to the needs of scientific illustration. Currently, traditional scientific illustrations are produced in a series of composition stages, combining different pictorial elements using 2D digital layering. Our approach extends the layer metaphor into 3D without giving up the advantages of 2D methods. The new compositing approach allows for effects such as selective transparency, occlusion overrides, and soft depth buffering. Furthermore, we show how common manipulation techniques such as masking can be integrated into this concept. These tools behave just like in 2D, but their influence extends beyond a single viewpoint. Since the presented approach makes no assumptions about the underlying rendering algorithms, layers can be generated based on polygonal geometry, volumetric data, point-based representations, or others. Our implementation exploits current graphics hardware and permits real-time interaction and rendering.
Sun, 01 Aug 2010 00:00:00 GMThttp://hdl.handle.net/1956/98022010-08-01T00:00:00ZAxiom-Based Transformations: Optimisation and Testing
http://hdl.handle.net/1956/9801
Axiom-Based Transformations: Optimisation and Testing
Bagge, Anya Helene; Haveraaen, Magne
Journal article
Programmers typically have knowledge about properties of their programs that is not explicitly expressed in the code: properties that may be very useful for, e.g., compiler optimisation and automated testing. Although such information is sometimes written down in a formal or informal specification, it is generally not accessible to compilers and other tools. However, using the idea of concepts and axioms in the upcoming C++ standard, we may embed axioms with program code. In this paper, we sketch how such axioms can be interpreted as rewrite rules and test oracles. Rewrite rules together with user-defined transformation strategies allow us to implement program- or library-specific optimisations.
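The abstract's central idea, reading one axiom both as a rewrite rule and as a test oracle, can be illustrated with a minimal Python stand-in (the paper targets C++ concepts and axioms; the term encoding and all names below are hypothetical):

```python
# Sketch (not the paper's machinery): the axiom x + 0 = x used two ways.
# Terms are tuples: ('add', left, right) or ('num', n).

def rewrite_add_zero(term):
    """Rewrite rule derived from the axiom x + 0 = x, applied bottom-up."""
    if term[0] == 'add':
        left = rewrite_add_zero(term[1])
        right = rewrite_add_zero(term[2])
        if right == ('num', 0):
            return left            # x + 0  ~>  x
        return ('add', left, right)
    return term

def evaluate(term):
    if term[0] == 'num':
        return term[1]
    return evaluate(term[1]) + evaluate(term[2])

def oracle_add_zero(x):
    """The same axiom read as a test oracle: eval(x + 0) must equal eval(x)."""
    return evaluate(('add', x, ('num', 0))) == evaluate(x)

t = ('add', ('add', ('num', 3), ('num', 0)), ('num', 4))
assert rewrite_add_zero(t) == ('add', ('num', 3), ('num', 4))
assert oracle_add_zero(('num', 7))
```

The same syntactic equation thus drives an optimisation (left-to-right rewriting) and a property-based test (semantic equality of both sides).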
Sat, 10 Oct 2009 00:00:00 GMT
http://hdl.handle.net/1956/9801
2009-10-10T00:00:00Z
Current Trends for 4D Space-Time Topology for Semantic Flow Segmentation
http://hdl.handle.net/1956/9796
Current Trends for 4D Space-Time Topology for Semantic Flow Segmentation
Matković, Krešimir; Lež, Alan; Hauser, Helwig; Pobitzer, Armin; Theisel, Holger; Kuhn, Alexander; Otto, Mathias; Peikert, Ronald; Schindler, Benjamin; Fuchs, Raphael
Journal article
Sat, 01 Jan 2011 00:00:00 GMT
http://hdl.handle.net/1956/9796
2011-01-01T00:00:00Z
A Diagrammatic Logic for Object-Oriented Visual Modeling
http://hdl.handle.net/1956/9728
A Diagrammatic Logic for Object-Oriented Visual Modeling
Diskin, Zinovy; Wolter, Uwe Egbert
Journal article
The formalism of generalized sketches is a graph-based specification format that borrows its main ideas from categorical and ordinary first-order logic, and adapts them to software engineering needs. In the engineering jargon, it is a modeling language design pattern that combines mathematical rigor and appealing graphical appearance. The paper presents a careful motivation and justification of the applicability of generalized sketches for formalizing practical modeling notations. We extend the sketch formalism by dependencies between predicate symbols and develop new semantic notions based on the Instances-as-typed-structures idea. We show that this new framework fits in the general patterns of the institution theory and is well amenable to algebraic manipulations.
Keywords: Diagrammatic modeling; model management; generic logic; categorical logic; diagram predicate; categorical sketch
Fri, 21 Nov 2008 00:00:00 GMT
http://hdl.handle.net/1956/9728
2008-11-21T00:00:00Z
Fusing a Transformation Language with an Open Compiler
http://hdl.handle.net/1956/9726
Fusing a Transformation Language with an Open Compiler
Kalleberg, Karl Trygve; Visser, Eelco
Journal article
<p>Program transformation systems provide powerful analysis and transformation frameworks as well as concise languages for language processing, but instantiating them for every subject language is an arduous task, most often resulting in half-completed frontends. Compilers provide mature frontends with robust parsers and type checkers, but solving language processing problems in general-purpose languages without transformation libraries is tedious. Reusing these frontends with existing transformation systems is therefore attractive. However, for this reuse to be optimal, the functional logic found in the frontend should be exposed to the transformation system – simple data serialization of the abstract syntax tree is not enough, since this fails to expose important compiler functionality, such as import graphs, symbol tables and the type checker.</p>
<p>In this paper, we introduce a novel and general technique for combining term-based transformation systems with existing language frontends. The technique is presented in the context of a scriptable analysis and transformation framework for Java built on top of the Eclipse Java compiler. The framework consists of an adapter automatically extracted from the abstract syntax tree of the compiler and an interpreter for the Stratego program transformation language. The adapter allows the Stratego interpreter to rewrite directly on the compiler AST. We illustrate the applicability of our system with scripts written in Stratego that perform framework and library-specific analyses and transformations.</p>
Tue, 01 Apr 2008 00:00:00 GMT
http://hdl.handle.net/1956/9726
2008-04-01T00:00:00Z
The Second Rewrite Engines Competition
http://hdl.handle.net/1956/9725
The Second Rewrite Engines Competition
Durán, Francisco; Roldán, Manuel; Balland, Emilie; van den Brand, Mark; Eker, Steven; Kalleberg, Karl Trygve; Kats, Lennart C. L.; Moreau, Pierre-Etienne; Schevchenko, Ruslan; Visser, Eelco
Journal article
The Second Rewrite Engines Competition (REC) was held as part of the 7th Workshop on Rewriting Logic and its Applications (WRLA 2008). Five systems participated in this edition of the competition, namely ASF+SDF, Maude, Stratego/XT, TermWare, and Tom. We explain here how the competition was organized and conducted, and present its main results and conclusions.
Mon, 29 Jun 2009 00:00:00 GMT
http://hdl.handle.net/1956/9725
2009-06-29T00:00:00Z
Domain-Specific Languages for Composable Editor Plugins
http://hdl.handle.net/1956/9721
Domain-Specific Languages for Composable Editor Plugins
Kats, Lennart C. L.; Kalleberg, Karl Trygve; Visser, Eelco
Journal article
Modern IDEs increase developer productivity by incorporating many different kinds of editor services. These can be purely syntactic, such as syntax highlighting, code folding, and an outline for navigation; or they can be based on the language semantics, such as in-line type error reporting and resolving identifier declarations. Building all these services from scratch requires both the extensive knowledge of the sometimes complicated and highly interdependent APIs and extension mechanisms of an IDE framework, and an in-depth understanding of the structure and semantics of the targeted language. This paper describes Spoofax/IMP, a meta-tooling suite that provides high-level domain-specific languages for describing editor services, relieving editor developers from much of the framework-specific programming. Editor services are defined as composable modules of rules coupled to a modular SDF grammar. The composability provided by the SGLR parser and the declaratively defined services allows embedded languages and language extensions to be easily formulated as additional rules extending an existing language definition. The service definitions are used to generate Eclipse editor plugins. We discuss two examples: an editor plugin for WebDSL, a domain-specific language for web applications, and the embedding of WebDSL in Stratego, used for expressing the (static) semantic rules of WebDSL.
Fri, 17 Sep 2010 00:00:00 GMT
http://hdl.handle.net/1956/9721
2010-09-17T00:00:00Z
Exploring Subexponential Parameterized Complexity of Completion Problems
http://hdl.handle.net/1956/9589
Exploring Subexponential Parameterized Complexity of Completion Problems
Drange, Pål Grønås; Fomin, Fedor V.; Pilipczuk, Michal Pawel; Villanger, Yngve
Journal article
<p>Let F be a family of graphs. In the F-Completion problem, we are given an n-vertex graph G and an integer k as input, and asked whether at most k edges can be added to G so that the resulting graph does not contain a graph from F as an induced subgraph. It appeared recently that special cases of F-Completion, the problem of completing into a chordal graph known as Minimum Fill-in, corresponding to the case of F = {C4, C5, C6, ...}, and the problem of completing into a split graph, i.e., the case of F = {C4, 2K2, C5}, are solvable in parameterized subexponential time 2^{O(√(k log k))}·n^{O(1)}. The exploration of this phenomenon is the main motivation for our research on F-Completion.</p>
<p>In this paper we prove that completions into several well-studied classes of graphs without long induced cycles also admit parameterized subexponential time algorithms by showing that:</p>
<p>The problem Trivially Perfect Completion is solvable in parameterized subexponential time 2^{O(√(k log k))}·n^{O(1)}; that is, F-Completion for F = {C4, P4}, a cycle and a path on four vertices.</p>
<p>The problems known in the literature as Pseudosplit Completion, the case where F = {2K2, C4}, and Threshold Completion, where F = {2K2, P4, C4}, are also solvable in time 2^{O(√(k log k))}·n^{O(1)}.</p>
<p>We complement our algorithms for F-Completion with the following lower bounds:</p>
<p>For F = {2K2}, F = {C4}, F = {P4}, and F = {2K2, P4}, F-Completion cannot be solved in time 2^{o(k)}·n^{O(1)} unless the Exponential Time Hypothesis (ETH) fails.</p>
<p>Our upper and lower bounds provide a complete picture of the subexponential parameterized complexity of F-Completion problems for F ⊆ {2K2, C4, P4}.</p>
Presented at the 31st International Symposium on Theoretical Aspects of Computer Science (STACS 2014)
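The F-Completion definition above can be made concrete by pure brute force on tiny graphs; this Python sketch (illustrative only, exponential in n, nothing like the paper's subexponential parameterized algorithms) decides Trivially Perfect Completion, i.e. F = {C4, P4}:

```python
from itertools import combinations, permutations

P4 = {(0, 1), (1, 2), (2, 3)}          # path on four vertices
C4 = {(0, 1), (1, 2), (2, 3), (3, 0)}  # cycle on four vertices

def has_induced(edges, n, pattern, p):
    """True iff the n-vertex graph (edges: set of frozensets) contains the
    p-vertex pattern as an *induced* subgraph."""
    pat = {frozenset(e) for e in pattern}
    for verts in permutations(range(n), p):
        if all((frozenset({verts[i], verts[j]}) in edges) ==
               (frozenset({i, j}) in pat)
               for i in range(p) for j in range(i + 1, p)):
            return True
    return False

def tp_completion(edges, n, k):
    """Can <= k edges be added so the result is {C4, P4}-free
    (i.e. trivially perfect)?  Brute force over sets of non-edges."""
    non_edges = [frozenset({u, v}) for u, v in combinations(range(n), 2)
                 if frozenset({u, v}) not in edges]
    for size in range(k + 1):
        for extra in combinations(non_edges, size):
            g = edges | set(extra)
            if not has_induced(g, n, P4, 4) and not has_induced(g, n, C4, 4):
                return True
    return False

# P4 itself needs exactly one added edge to become trivially perfect:
p4 = {frozenset({0, 1}), frozenset({1, 2}), frozenset({2, 3})}
assert not tp_completion(p4, 4, 0)
assert tp_completion(p4, 4, 1)
```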
Wed, 19 Feb 2014 00:00:00 GMT
http://hdl.handle.net/1956/9589
2014-02-19T00:00:00Z
On cutwidth parameterized by vertex cover
http://hdl.handle.net/1956/9561
On cutwidth parameterized by vertex cover
Cygan, Marek; Lokshtanov, Daniel; Pilipczuk, Marcin; Pilipczuk, Michal Pawel; Saurabh, Saket
Journal article
We study the CUTWIDTH problem, where the input is a graph G, and the objective is to find a linear layout of the vertices that minimizes the maximum number of edges intersected by any vertical line inserted between two consecutive vertices. We give an algorithm for CUTWIDTH with running time O(2^k·n^{O(1)}). Here k is the size of a minimum vertex cover of the input graph G, and n is the number of vertices in G. Our algorithm gives an O(2^{n/2}·n^{O(1)}) time algorithm for CUTWIDTH on bipartite graphs as a corollary. This is the first non-trivial exact exponential time algorithm for CUTWIDTH on a graph class where the problem remains NP-complete. Additionally, we show that CUTWIDTH parameterized by the size of the minimum vertex cover of the input graph does not admit a polynomial kernel unless NP ⊆ coNP/poly. Our kernelization lower bound contrasts with the recent results of Bodlaender et al. (ICALP, Springer, Berlin, 2011; SWAT, Springer, Berlin, 2012) that both Treewidth and Pathwidth parameterized by vertex cover do admit polynomial kernels.
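The layout-based definition of cutwidth used above can be checked directly on small graphs; the following Python sketch (an O(n!) illustration of the definition, not the paper's O(2^k·n^{O(1)}) algorithm) tries every linear layout:

```python
from itertools import permutations

def cutwidth(n, edges):
    """Exact cutwidth by exhausting all linear layouts (illustration only).

    The width of a layout is the maximum, over the n-1 gaps between
    consecutive vertices, of the number of edges crossing that gap."""
    best = float('inf')
    for order in permutations(range(n)):
        pos = {v: i for i, v in enumerate(order)}
        width = max(sum(1 for u, v in edges
                        if min(pos[u], pos[v]) < gap <= max(pos[u], pos[v]))
                    for gap in range(1, n))
        best = min(best, width)
    return best

# A 4-cycle has cutwidth 2: the first vertex of any layout already has
# both of its incident edges crossing the first gap.
assert cutwidth(4, [(0, 1), (1, 2), (2, 3), (3, 0)]) == 2
```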
Tue, 01 Apr 2014 00:00:00 GMT
http://hdl.handle.net/1956/9561
2014-04-01T00:00:00Z
Parameterized complexity of Eulerian deletion problems
http://hdl.handle.net/1956/9520
Parameterized complexity of Eulerian deletion problems
Cygan, Marek; Pilipczuk, Marcin; Marx, Dániel; Pilipczuk, Michal Pawel; Schlotter, Ildikó
Journal article
We study a family of problems where the goal is to make a graph Eulerian, i.e., connected and with all the vertices having even degrees, by a minimum number of deletions. We completely classify the parameterized complexity of various versions: undirected or directed graphs, vertex or edge deletions, with or without the requirement of connectivity, etc. The collection of results shows an interesting contrast: while the node-deletion variants remain intractable, i.e., W[1]-hard for all the studied cases, edge-deletion problems are either fixed-parameter tractable or polynomial-time solvable. Of particular interest is a randomized FPT algorithm for making an undirected graph Eulerian by deleting the minimum number of edges, based on a novel application of the color coding technique. For versions that remain NP-complete but fixed-parameter tractable we also consider possibilities of polynomial kernelization; unfortunately, we prove that this is not possible unless NP ⊆ coNP/poly.
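The target property of all these deletion problems, a connected graph in which every vertex has even degree, is simple to test; a short Python sketch of the check (illustrative, not part of the paper's algorithms):

```python
def is_eulerian(n, edges):
    """Check the target property of the deletion problems: the graph on
    vertices 0..n-1 is connected and every vertex has even degree."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    if any(len(nbrs) % 2 for nbrs in adj):
        return False                      # an odd-degree vertex
    seen, stack = {0}, [0]                # depth-first connectivity check
    while stack:
        u = stack.pop()
        for w in adj[u]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == n

assert is_eulerian(3, [(0, 1), (1, 2), (2, 0)])   # triangle: Eulerian
assert not is_eulerian(3, [(0, 1), (1, 2)])       # path: odd endpoints
```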
Wed, 01 Jan 2014 00:00:00 GMT
http://hdl.handle.net/1956/9520
2014-01-01T00:00:00Z
Solving the 2-disjoint connected subgraphs problem faster than 2ⁿ
http://hdl.handle.net/1956/9519
Solving the 2-disjoint connected subgraphs problem faster than 2ⁿ
Cygan, Marek; Pilipczuk, Marcin; Pilipczuk, Michal Pawel; Wojtaszczyk, Jakub Onufry
Journal article
The 2-DISJOINT CONNECTED SUBGRAPHS problem, given a graph along with two disjoint sets of terminals Z1, Z2, asks whether it is possible to find disjoint sets A1, A2, such that Z1 ⊆ A1, Z2 ⊆ A2 and A1, A2 induce connected subgraphs. While the naive algorithm runs in O(2^n·n^{O(1)}) time, solutions with complexity of the form O((2 − ε)^n) have been found only for special graph classes (van 't Hof et al. in Theor. Comput. Sci. 410(47–49):4834–4843, 2009; Paulusma and van Rooij in Theor. Comput. Sci. 412(48):6761–6769, 2011). In this paper we present an O(1.933^n) algorithm for 2-DISJOINT CONNECTED SUBGRAPHS in the general case, thus breaking the 2^n barrier. As a counterpoint to this result we show that if we parameterize the problem by the number of non-terminal vertices, it is hard both to speed up the brute-force approach and to find a polynomial kernel.
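The problem statement above admits a direct brute-force check on small instances; this Python sketch (illustrative only, far slower than the paper's O(1.933^n) algorithm) tries every assignment of non-terminals:

```python
from itertools import product

def connected(verts, adj):
    """Is the subgraph induced by `verts` connected?"""
    verts = set(verts)
    start = next(iter(verts))
    seen, stack = {start}, [start]
    while stack:
        u = stack.pop()
        for w in adj[u]:
            if w in verts and w not in seen:
                seen.add(w)
                stack.append(w)
    return seen == verts

def two_dcs(n, edges, Z1, Z2):
    """Brute-force 2-Disjoint Connected Subgraphs: assign each
    non-terminal to A1, A2 or neither, then test both connectivities."""
    adj = [set() for _ in range(n)]
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    free = [v for v in range(n) if v not in Z1 and v not in Z2]
    for choice in product((0, 1, 2), repeat=len(free)):
        A1, A2 = set(Z1), set(Z2)
        for v, c in zip(free, choice):
            if c == 1:
                A1.add(v)
            elif c == 2:
                A2.add(v)
        if connected(A1, adj) and connected(A2, adj):
            return True
    return False

# Path 0-1-2-3: terminals {0} and {3} can be separated as {0,1} / {2,3}.
assert two_dcs(4, [(0, 1), (1, 2), (2, 3)], {0}, {3})
# Path 0-1-2: {0,2} cannot be connected without vertex 1, which is in Z2.
assert not two_dcs(3, [(0, 1), (1, 2)], {0, 2}, {1})
```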
Wed, 01 Oct 2014 00:00:00 GMT
http://hdl.handle.net/1956/9519
2014-10-01T00:00:00Z
Managing spatial selections with contextual snapshots
http://hdl.handle.net/1956/9464
Managing spatial selections with contextual snapshots
Mindek, Peter; Gröller, Eduard; Bruckner, Stefan
Journal article
Spatial selections are a ubiquitous concept in visualization. By localizing particular features, they can be analysed and compared in different views. However, the semantics of such selections often depend on specific parameter settings and it can be difficult to reconstruct them without additional information. In this paper, we present the concept of contextual snapshots as an effective means for managing spatial selections in visualized data. The selections are automatically associated with the context in which they have been created. Contextual snapshots can also be used as the basis for interactive integrated and linked views, which enable in-place investigation and comparison of multiple visual representations of data. Our approach is implemented as a flexible toolkit with well-defined interfaces for integration into existing systems. We demonstrate the power and generality of our techniques by applying them to several distinct scenarios such as the visualization of simulation data, the analysis of historical documents and the display of anatomical data.
Mon, 01 Dec 2014 00:00:00 GMT
http://hdl.handle.net/1956/9464
2014-12-01T00:00:00Z
Scaling the scales - A suggested improvement to IBM's Intelligent Recommendation Algorithm
http://hdl.handle.net/1956/9211
Scaling the scales - A suggested improvement to IBM's Intelligent Recommendation Algorithm
Myrtveit, Magnar
Master thesis
Recommender systems appear in a large variety of applications, and their use has become very common in recent years. As a lot of money can be made by companies having a better recommender system than their competitors, much of the research behind the best recommendation algorithms is proprietary and has not been published. We suggest an improvement to a graph-based collaborative filtering recommendation algorithm developed at IBM Research and published in "KDD '99 Proceedings of the fifth ACM SIGKDD international conference on Knowledge discovery and data mining" in 1999: the Intelligent Recommendation Algorithm (IRA). We start by giving an overview of the field of recommender systems, how they work, and how they can be utilized. Next we give a detailed description of the graph-theoretical ideas IRA is built upon, and take an in-depth look at how the algorithm works. We then present a suggested improvement to IRA, called Scaling the scales. In Scaling the scales, customers using differently sized ranges of the rating scale can still be used to predict each other. We give a short overview of the design behind the implementation of our recommender system, which is available at GitHub (https://github.com/Stadly/Scaling-the-scales), as well as BORA (https://bora-uib-no.pva.uib.no), where this thesis is published. The implementation uses a modular software design to allow for developing extensions to the recommendation algorithms we have implemented, as well as deploying other recommendation algorithms in our system. Next we compare the recommendation quality of IRA and Scaling the scales using leave-one-out cross-validation. This method works by hiding a known rating, predicting what this rating should be, and comparing the correct and predicted ratings. Experiments on four different datasets were run. Our results indicate that Scaling the scales gives slightly smaller errors than IRA when predicting ratings. A bigger improvement is that Scaling the scales calculates predictions in many cases where IRA is unsuccessful. Lastly, ideas for further improvement of Scaling the scales are presented.
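The leave-one-out evaluation described above (hide a known rating, predict it, compare) can be sketched in a few lines of Python; the recommender plugged in here is a toy item-mean predictor, not IRA or Scaling the scales:

```python
def loocv_mae(ratings, predict):
    """Leave-one-out cross-validation: hide each known rating in turn,
    predict it from the rest, and accumulate the absolute error.
    `predict(ratings, user, item)` is a stand-in for any recommender;
    it may return None when no prediction can be made."""
    errors, skipped = [], 0
    for (user, item), r in ratings.items():
        held_out = {k: v for k, v in ratings.items() if k != (user, item)}
        p = predict(held_out, user, item)
        if p is None:
            skipped += 1
        else:
            errors.append(abs(p - r))
    return sum(errors) / len(errors), skipped

def item_mean(ratings, user, item):
    """Toy recommender: predict the item's mean rating by other users."""
    vals = [v for (u, i), v in ratings.items() if i == item and u != user]
    return sum(vals) / len(vals) if vals else None

data = {('a', 'x'): 4, ('b', 'x'): 4, ('a', 'y'): 2}
mae, skipped = loocv_mae(data, item_mean)
assert mae == 0.0 and skipped == 1   # ('a','y') has no other raters
```

The `skipped` count mirrors the thesis's observation that coverage (how often a prediction can be produced at all) matters alongside error size.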
Thu, 20 Nov 2014 00:00:00 GMT
http://hdl.handle.net/1956/9211
2014-11-20T00:00:00Z
The index tracking problem with a limit on portfolio size
http://hdl.handle.net/1956/9185
The index tracking problem with a limit on portfolio size
Mutunge, Purity Kamene
Master thesis
For a passive fund manager tracking a benchmark, it is not uncommon to select some, but not all, of the assets in the index for his portfolio. In this thesis, we consider the problem of minimizing the tracking error under the mean-variance formulation, which gives us a quadratic objective function. Our model includes a cardinality constraint that limits the portfolio size. Our problem is a mixed integer nonlinear problem with a convex, quadratic objective function. For this NP-hard problem, we apply continuous as well as Lagrangian relaxations. We illustrate a subgradient algorithm, modified for our problem. We also present two construction and three improvement heuristics for this problem. Our approaches are compared to the results of an exact and an interrupted solver, and computational time is of interest. Our data sets range in size from 50 to 400 (500) assets, with real constituent weights from S&P Dow Jones Indices for the largest index.
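The objective and the cardinality constraint can be made concrete with a small Python sketch (illustrative names and toy data; equal weights within the support and brute force over supports, unlike the relaxations and heuristics studied in the thesis):

```python
from itertools import combinations

def tracking_error(w, returns, index_returns):
    """Quadratic objective: mean squared deviation between portfolio
    returns and index returns over the sample periods."""
    T = len(index_returns)
    err = 0.0
    for t in range(T):
        port = sum(w[i] * returns[i][t] for i in range(len(w)))
        err += (port - index_returns[t]) ** 2
    return err / T

def best_equal_weight(returns, index_returns, K):
    """Enumerate all K-asset supports (the cardinality constraint) with
    equal weights inside the support; return the best support found."""
    n = len(returns)
    best, best_support = float('inf'), None
    for support in combinations(range(n), K):
        w = [1.0 / K if i in support else 0.0 for i in range(n)]
        te = tracking_error(w, returns, index_returns)
        if te < best:
            best, best_support = te, support
    return best_support, best

# Toy instance: asset 2 replicates the index exactly, so K = 1 picks it.
returns = [[0.1, 0.0], [0.0, 0.1], [0.05, 0.05]]
index = [0.05, 0.05]
assert best_equal_weight(returns, index, 1) == ((2,), 0.0)
```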
Wed, 19 Nov 2014 00:00:00 GMT
http://hdl.handle.net/1956/9185
2014-11-19T00:00:00Z
Classifying and Measuring Student Problems and Misconceptions
http://hdl.handle.net/1956/9081
Classifying and Measuring Student Problems and Misconceptions
Rosbach, Alexander Hoem; Bagge, Anya Helene
Chapter
In this paper we report on an attempt to classify student problems and mistakes, and to measure the frequency of particular problems in a first-semester programming course. We also propose a scheme for annotating student hand-ins which may be useful both in grading and in future research.
Tue, 01 Jan 2013 00:00:00 GMT
http://hdl.handle.net/1956/9081
2013-01-01T00:00:00Z
Rate and power allocation for discrete-rate link adaptation
http://hdl.handle.net/1956/8985
Rate and power allocation for discrete-rate link adaptation
Gjendemsjø, Anders; Øien, Geir Egil; Holm, Henrik; Alouini, Mohamed-Slim; Gesbert, David; Hole, Kjell Jørgen; Orten, Pål
Journal article
Link adaptation, in particular adaptive coded modulation (ACM), is a promising tool for bandwidth-efficient transmission in a fading environment. The main motivation behind employing ACM schemes is to improve the spectral efficiency of wireless communication systems. In this paper, using a finite number of capacity achieving component codes, we propose new transmission schemes employing constant power transmission, as well as discrete- and continuous-power adaptation, for slowly varying flat-fading channels. We show that the proposed transmission schemes can achieve throughputs close to the Shannon limits of flat-fading channels using only a small number of codes. Specifically, using a fully discrete scheme with just four codes, each associated with four power levels, we achieve a spectral efficiency within 1 dB of the continuous-rate continuous-power Shannon capacity. Furthermore, when restricted to a fixed number of codes, the introduction of power adaptation has significant gains with respect to average spectral efficiency and probability of no transmission compared to a constant power scheme.
Wed, 09 Jan 2008 00:00:00 GMT
http://hdl.handle.net/1956/8985
2008-01-09T00:00:00Z
A Hierarchical Splitting Scheme to Reveal Insight into Highly Self-Occluded Integral Surfaces
http://hdl.handle.net/1956/8965
A Hierarchical Splitting Scheme to Reveal Insight into Highly Self-Occluded Integral Surfaces
Brambilla, Andrea; Viola, Ivan; Hauser, Helwig
Journal article
In flow visualization, integral surfaces are of particular interest for their ability to describe trajectories of massless particles. In areas of swirling motion, integral surfaces can become very complex and difficult to understand. Taking inspiration from traditional illustration techniques, such as cut-aways and exploded views, we propose a surface analysis tool based on surface splitting and focus+context visualization. Our surface splitting scheme is hierarchical, and at every level of the hierarchy the best cut is chosen according to a surface complexity metric. In order to make the interpretation of the resulting pieces straightforward, cuts are always made along isocurves of specific flow attributes. Moreover, a degree of interest can be specified, so that the splitting procedure attempts to unveil the occluded interesting areas. Through practical examples, we show that our approach is able to overcome the lack of understanding originating from structural occlusion.
Sun, 01 Jan 2012 00:00:00 GMT
http://hdl.handle.net/1956/8965
2012-01-01T00:00:00Z
Integrated Multi-aspect Visualization of 3D Fluid Flows
http://hdl.handle.net/1956/8963
Integrated Multi-aspect Visualization of 3D Fluid Flows
Brambilla, Andrea; Andreassen, Øyvind; Hauser, Helwig
Chapter
The motion of a fluid is affected by several intertwined flow aspects. Analyzing one aspect at a time can only yield partial information about the flow behavior. More details can be revealed by studying their interactions. Our approach enables the investigation of these interactions by simultaneously visualizing meaningful flow aspects, such as swirling motion and shear strain. We adopt the notions of relevance and coherency. Relevance identifies locations where a certain flow aspect is deemed particularly important. The related piece of information is visualized by a specific visual entity, placed at the corresponding location. Coherency instead represents the homogeneity of a flow property in a local neighborhood. It is exploited in order to avoid visual redundancy and to reduce occlusion and cluttering. We have applied our approach to three CFD datasets, obtaining meaningful insights.
The definitive version is available at http://diglib.eg.org/
Tue, 01 Jan 2013 00:00:00 GMT
http://hdl.handle.net/1956/8963
2013-01-01T00:00:00Z
Illustrative Flow Visualization: State of the Art, Trends and Challenges
http://hdl.handle.net/1956/8962
Illustrative Flow Visualization: State of the Art, Trends and Challenges
Brambilla, Andrea; Carnecky, Robert; Peikert, Robert; Viola, Ivan; Hauser, Helwig
Journal article
Flow visualization is a well-established branch of scientific visualization and it currently represents an invaluable resource to many fields, like automotive design, meteorology and medical imaging. Thanks to the capabilities of modern hardware, flow datasets are increasing in size and complexity, and traditional flow visualization techniques need to be updated and improved in order to deal with the upcoming challenges. A fairly recent trend to enhance the expressiveness of scientific visualization is to produce depictions of physical phenomena taking inspiration from traditional handcrafted illustrations: this approach is known as illustrative visualization, and it is getting a foothold in flow visualization as well.

In this state-of-the-art report we give an overview of the existing illustrative techniques for flow visualization, we highlight which problems have been solved and which issues still need further investigation, and, finally, we provide remarks and insights on the current trends in illustrative flow visualization.
The definitive version is available at http://diglib.eg.org/
Sun, 01 Jan 2012 00:00:00 GMT
http://hdl.handle.net/1956/8962
2012-01-01T00:00:00Z
Visibility-oriented Visualization Design for Flow Illustration
http://hdl.handle.net/1956/8961
Visibility-oriented Visualization Design for Flow Illustration
Brambilla, Andrea
Doctoral thesis
<p>Flow phenomena are ubiquitous in our world and they affect many aspects of our daily life. For this reason, they are the subject of extensive studies in several research fields. In medicine, the blood flow through our vessels can reveal important information about cardiovascular diseases. The air flow around a vehicle and the motion of fluids in a combustion engine are examples of relevant flow phenomena in engineering disciplines. Meteorologists, climatologists and oceanographers are instead concerned with winds and water currents.</p>
<p>Thanks to the recent advancements in computational fluid dynamics and to the increasing power of modern hardware, accurate simulations of flow phenomena are feasible nowadays. The evolution of multiple flow attributes, such as velocity, temperature and pressure, can be simulated over large spatial and temporal domains (4D). The amount of data generated by this process is massive, therefore visualization techniques are often adopted in order to ease the analysis phase. The overall goal is to convey information about the phenomena of interest through a suitable representation of the data at hand. Due to the multivariate and multidimensional nature of the data, visibility issues (such as cluttering and occlusion), represent a significant challenge.</p>
<p>Flow visualization can greatly benefit from studying and addressing visibility issues already in the design phase. In this thesis we investigate and demonstrate the effectiveness of taking visibility management into account early in the design process. We apply this principle to three characteristic flow visualization scenarios: (1) The simultaneous visualization of multiple flow attributes. (2) The visual inspection of single and multiple integral surfaces. (3) The selection of seeding curves for constructing families of integral surfaces. Our techniques result in clutter- and occlusion-free visualizations, which effectively illustrate the key aspects of the flow behavior.</p>
<p>For demonstration purposes, we have applied our approaches to a number of application cases. Additionally, we have discussed our visualization designs with domain experts. They showed a genuine interest in our work and provided insightful suggestions for future research directions.</p>
Thu, 18 Dec 2014 00:00:00 GMT
http://hdl.handle.net/1956/8961
2014-12-18T00:00:00Z
Parsing in a Broad Sense
http://hdl.handle.net/1956/8938
Parsing in a Broad Sense
Zaytsev, Vadim; Bagge, Anya Helene
Chapter
Having multiple representations of the same instance is common in software language engineering: models can be visualised as graphs, edited as text, serialised as XML. When mappings between such representations are considered, terms “parsing” and “unparsing” are often used with incompatible meanings and varying sets of underlying assumptions. We investigate 12 classes of artefacts found in software language processing, present a case study demonstrating their implementations and state-of-the-art mappings among them, and systematically explore the technical research space of bidirectional mappings to build on top of the existing body of work and discover as of yet unused relationships.
Wed, 01 Jan 2014 00:00:00 GMT
http://hdl.handle.net/1956/8938
2014-01-01T00:00:00Z
Reflections on Courses for Software Language Engineering
http://hdl.handle.net/1956/8930
Reflections on Courses for Software Language Engineering
Bagge, Anya Helene; Lämmel, Ralf; Zaytsev, Vadim
Conference object
Software Language Engineering (SLE) has emerged as a field in computer science research and software engineering, but it has yet to become entrenched as part of the standard curriculum at universities. Many places have a compiler construction (CC) course and a programming languages (PL) course, but these are not aimed at training students in typical SLE matters such as DSL design and implementation, language workbenches, generalised parsers, and meta-tools. We describe our experiences with developing and teaching software language engineering courses at the Universities of Bergen and Koblenz-Landau. We reflect on lecture topics, assignments, development of course material, and other aspects and variation points in course design.
Wed, 01 Jan 2014 00:00:00 GMT
http://hdl.handle.net/1956/8930
2014-01-01T00:00:00Z
Separating Exceptional Concerns
http://hdl.handle.net/1956/8918
Separating Exceptional Concerns
Bagge, Anya Helene
Chapter
Traditional error handling mechanisms, including exceptions, have several weaknesses that interfere with maintainability, flexibility and genericity in software: error code is tangled with normal code; reporting is tangled with handling; and generic code is locked into specific ways of reporting and handling errors. We need to deal with errors in a declarative way, where the concerns of errors, error reporting and error handling are separated and dealt with individually by the programmer.
Sun, 01 Jan 2012 00:00:00 GMT
http://hdl.handle.net/1956/8918
2012-01-01T00:00:00Z
Inferring Required Permissions for Statically Composed Programs
http://hdl.handle.net/1956/8916
Inferring Required Permissions for Statically Composed Programs
Hasu, Tero; Bagge, Anya Helene; Haveraaen, Magne
Chapter
Permission-based security models are common in smartphone operating systems. Such models implement access control for sensitive APIs, introducing an additional concern for application developers. It is important for the correct set of permissions to be declared for an application, as too small a set is likely to result in runtime errors, whereas too large a set may needlessly worry users. Unfortunately, not all platform vendors provide tool support to assist in determining the set of permissions that an application requires.

We present a language-based solution for permission management. It entails the specification of permission information within a collection of source code, and allows for the inference of permission requirements for a chosen program composition. Our implementation is based on Magnolia, a programming language demonstrating characteristics that are favorable for this use case. A language with a suitable component system supports permission management also in a cross-platform codebase, allowing abstraction over different platform-specific implementations and concrete permission requirements. When the language also requires any "wiring" of components to be known at compile time, and otherwise makes design tradeoffs that favor ease of static analysis, accurate inference of permission requirements becomes possible.
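The inference idea, that statically known component wiring lets the required permission set be computed as a union over reachable operations, can be sketched as follows (a Python stand-in, not Magnolia; the call-graph encoding and permission names are hypothetical):

```python
def required_permissions(calls, perms, entry):
    """Infer an application's required permissions from a static call graph.

    calls: {function: [callees]} -- the statically known wiring;
    perms: {function: {declared permissions}};
    the result is the union over everything reachable from `entry`."""
    seen, stack, needed = set(), [entry], set()
    while stack:
        f = stack.pop()
        if f in seen:
            continue
        seen.add(f)
        needed |= perms.get(f, set())
        stack.extend(calls.get(f, []))
    return needed

# Hypothetical program: main uses a locator (needing GPS) and a logger.
calls = {'main': ['locate', 'log'], 'locate': ['gps']}
perms = {'gps': {'ACCESS_FINE_LOCATION'},
         'log': {'WRITE_EXTERNAL_STORAGE'}}
assert required_permissions(calls, perms, 'main') == {
    'ACCESS_FINE_LOCATION', 'WRITE_EXTERNAL_STORAGE'}
```

Unreachable components contribute nothing, which reflects the paper's point that the inferred set is neither too small (no runtime errors) nor needlessly large.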
Tue, 01 Jan 2013 00:00:00 GMT
http://hdl.handle.net/1956/8916
2013-01-01T00:00:00Z
A Pretty Good Formatting Pipeline
http://hdl.handle.net/1956/8915
A Pretty Good Formatting Pipeline
Bagge, Anya Helene; Hasu, Tero
Chapter
Proper formatting makes the structure of a program apparent and aids program comprehension. The need to format code arises in code generation and transformation, as well as in normal reading and editing situations. Commonly used pretty-printing tools in transformation frameworks provide an easy way to produce indented code that is fairly readable for humans, without reaching the level of purpose-built reformatting tools, such as those built into IDEs. This paper presents a library of pluggable components, built to support style-based formatting and reformatting of code, and to enable further experimentation with code formatting.
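The notion of a formatting pipeline built from pluggable components can be sketched roughly like this (a Python toy, not the paper's library; the stage names and the naive tokenizer are illustrative):

```python
# Each stage is an independent, swappable component; the pipeline is just
# their composition, so styles can be changed by replacing one stage.

def tokenize(src):
    """Naive tokenizer for a brace-and-semicolon toy language."""
    for sep in ('{', '}', ';'):
        src = src.replace(sep, ' ' + sep + ' ')
    return src.split()

def indent(tokens, width=4):
    """Indentation stage: one level per unmatched opening brace."""
    out, depth = [], 0
    for tok in tokens:
        if tok == '}':
            depth -= 1
        out.append(' ' * (width * depth) + tok)
        if tok == '{':
            depth += 1
    return out

def render(lines):
    return '\n'.join(lines)

def pipeline(src, stages):
    for stage in stages:
        src = stage(src)
    return src

formatted = pipeline('f(){a;}', [tokenize, indent, render])
assert formatted.splitlines()[0] == 'f()'
assert '    a' in formatted.splitlines()
```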
Tue, 01 Jan 2013 00:00:00 GMT
http://hdl.handle.net/1956/8915
2013-01-01T00:00:00Z
Walk Your Tree Any Way You Want
http://hdl.handle.net/1956/8914
Walk Your Tree Any Way You Want
Bagge, Anya Helene; Lämmel, Ralf
Chapter
Software transformations in the Nuthatch style are described as walks over trees (possibly graphs) that proceed in programmer-defined steps which may observe join points of the walk, may observe and affect state associated with the walk, may rewrite the walked tree, may contribute to a built tree, and must walk somewhere, typically along one branch or another. The approach blends well with OO programming. We have implemented the approach in the Nuthatch/J library for Java.
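The walk-with-steps idea can be miniaturized in Python (a stand-in, not the Nuthatch/J API): a step observes a join point (down or up), may touch state carried with the walk, and may rewrite the node it stands on:

```python
def walk(tree, step, state):
    """tree: (label, [children]).  The step function is called at the
    'down' join point before descending and at 'up' after the children
    have been walked; it may update state and return a rewritten node."""
    label, children = step(tree, 'down', state)
    children = [walk(child, step, state) for child in children]
    return step((label, children), 'up', state)

def count_and_increment(node, joinpoint, state):
    """Example step: count visits on the way down, rewrite on the way up."""
    label, children = node
    if joinpoint == 'down':
        state['visited'] += 1
    if joinpoint == 'up' and isinstance(label, int):
        return (label + 1, children)      # rewrite the walked tree
    return node

t = (1, [(2, []), (3, [(4, [])])])
state = {'visited': 0}
assert walk(t, count_and_increment, state) == (2, [(3, []), (4, [(5, [])])])
assert state['visited'] == 4
```

A single step function thus combines what would otherwise be separate visitor, accumulator and rewriter objects.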
Tue, 01 Jan 2013 00:00:00 GMT
http://hdl.handle.net/1956/8914
2013-01-01T00:00:00Z
Community-driven development for computational biology at Sprints, Hackathons and Codefests
http://hdl.handle.net/1956/8861
Community-driven development for computational biology at Sprints, Hackathons and Codefests
Möller, Steffen; Afgan, Enis; Banck, Michael; Bonnal, Raoul J. P.; Booth, Timothy; Chilton, John; Cock, Peter J. A.; Gumbel, Markus; Harris, Nomi; Holland, Richard; Kalaš, Matúš; Kaján, László; Kibukawa, Eri; Powel, David R.; Prins, Pjotr; Quinn, Jacqueline; Sallou, Olivier; Strozzi, Francesco; Seemann, Torsten; Sloggett, Clare; Soiland-Reyes, Stian; Spooner, William; Steinbiss, Sascha; Tille, Andreas; Travis, Anthony J.; Guimera, Roman V.; Katayama, Toshiaki; Chapman, Brad A.
Journal article
<p>Background: Computational biology comprises a wide range of technologies and approaches. Multiple technologies can be combined to create more powerful workflows if the individuals contributing the data or providing tools for its interpretation can find mutual understanding and consensus. Much conversation and joint investigation are required in order to identify and implement the best approaches. Traditionally, scientific conferences feature talks presenting novel technologies or insights, followed up by informal discussions during coffee breaks. In multi-institution collaborations, in order to reach agreement on implementation details or to transfer deeper insights in a technology and practical skills, a representative of one group typically visits the other. However, this does not scale well when the number of technologies or research groups is large. Conferences have responded to this issue by introducing Birds-of-a-Feather (BoF) sessions, which offer an opportunity for individuals with common interests to intensify their interaction. However, parallel BoF sessions often make it hard for participants to join multiple BoFs and find common ground between the different technologies, and BoFs are generally too short to allow time for participants to program together.</p>
<p>Results: This report summarises our experience with computational biology Codefests, Hackathons and Sprints, which are interactive developer meetings. They are structured to reduce the limitations of traditional scientific meetings described above by strengthening the interaction among peers and letting the participants determine the schedule and topics. These meetings are commonly run as loosely scheduled “unconferences” (self-organized identification of participants and topics for meetings) over at least two days, with early introductory talks to welcome and organize contributors, followed by intensive collaborative coding sessions. We summarise some prominent achievements of those meetings and describe differences in how these are organised, how their audience is addressed, and their outreach to their respective communities.</p>
<p>Conclusions: Hackathons, Codefests and Sprints share a stimulating atmosphere that encourages participants to jointly brainstorm and tackle problems of shared interest in a self-driven proactive environment, as well as providing an opportunity for new participants to get involved in collaborative projects.</p>
Thu, 27 Nov 2014 00:00:00 GMT
http://hdl.handle.net/1956/8875
Continuous Levels-of-Detail and Visual Abstraction for Seamless Molecular Visualization
Parulek, Julius; Jönsson, Daniel; Ropinski, Timo; Bruckner, Stefan; Ynnerman, Anders; Viola, Ivan
Journal article
Molecular visualization is often challenged by the need to render large molecular structures in real time. We introduce a novel approach that enables us to show even large protein complexes. Our method is based on the level-of-detail concept, where we exploit three different abstractions combined in one visualization. Firstly, molecular surface abstraction exploits three different surfaces, solvent-excluded surface (SES), Gaussian kernels and van der Waals spheres, combined as one surface by linear interpolation. Secondly, we introduce three shading abstraction levels and a method for creating seamless transitions between these representations. The SES representation with full shading and added contours stands in focus, while on the other side a sphere representation of a cluster of atoms with constant shading and without contours provides the context. Thirdly, we propose a hierarchical abstraction based on a set of clusters formed on molecular atoms. All three abstraction models are driven by one importance function classifying the scene into the near-, mid- and far-field. Moreover, we introduce a methodology to render the entire molecule directly using the A-buffer technique, which further improves the performance. The rendering performance is evaluated on a series of molecules of varying atom counts.
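The importance-driven blending between the three surface representations can be sketched as follows; the thresholds and linear weight scheme below are illustrative assumptions, not the paper's actual importance function.

```python
def blend_weights(importance, near=0.33, far=0.66):
    """Map a scalar importance in [0, 1] to interpolation weights for the
    (SES, Gaussian-kernel, van der Waals) surface representations.
    The 'near' and 'far' thresholds are illustrative, not from the paper."""
    if importance <= near:
        t = importance / near
        return (1.0 - t, t, 0.0)      # near-field: SES -> Gaussian kernels
    elif importance <= far:
        t = (importance - near) / (far - near)
        return (0.0, 1.0 - t, t)      # mid-field: Gaussian -> vdW spheres
    else:
        return (0.0, 0.0, 1.0)        # far-field: vdW spheres only
```

Because the weights always sum to one, a renderer could combine the three surfaces by linear interpolation without visible seams at the threshold crossings.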
Tue, 06 May 2014 00:00:00 GMT
http://hdl.handle.net/1956/8852
Perceptually Uniform Motion Space
Birkeland, Åsmund; Turkay, Cagatay; Viola, Ivan
Journal article
Flow data is often visualized by animated particles inserted into a flow field. The velocity of a particle on the screen is typically linearly scaled by the velocities in the data. However, the perception of velocity magnitude in animated particles is not necessarily linear. We present a study on how different parameters affect relative motion perception. We investigated the impact of four parameters: speed multiplier, direction, contrast type and global velocity scale. In addition, we investigated whether multiple motion cues and point distribution affect speed estimation. Several studies were executed to investigate the impact of each parameter. In the initial results, we noticed trends in scale and multiplier. Using the trends for the significant parameters, we designed a compensation model, which adjusts the particle speed to compensate for the effect of the parameters. We then performed a second study to investigate the performance of the compensation model. From the second study we detected a constant estimation error, which we adjusted for in the last study. In addition, we connect our work to established theories in psychophysics by comparing our model to a model based on Stevens' Power Law.
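The compensation idea can be sketched under Stevens' Power Law; the exponent used below is an illustrative assumption, not the value fitted in the study.

```python
def stevens_perceived(speed, a=0.8, k=1.0):
    """Stevens' power law: perceived magnitude grows as k * speed**a.
    The exponent a = 0.8 is illustrative, not the study's fitted value."""
    return k * speed ** a

def compensate(speed, a=0.8):
    """Adjust the displayed particle speed so that, under the power-law
    model, perceived speed becomes proportional to the data speed."""
    return speed ** (1.0 / a)
```

Under this model, doubling the data speed then doubles the perceived speed of the compensated particles, which is the perceptual uniformity the title refers to.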
Wed, 07 May 2014 00:00:00 GMT
http://hdl.handle.net/1956/8844
TMM@: a web application for the analysis of transmembrane helix mobility
Skjærven, Lars; Jonassen, Inge; Reuter, Nathalie
Journal article
<p>Background: To understand the mechanism by which a protein transmits a signal through the cell
membrane, an understanding of the flexibility of its transmembrane (TM) region is essential.
Normal Mode Analysis (NMA) has become the method of choice to investigate the slowest
motions in macromolecular systems. It has been widely used to study transmembrane channels and
pumps. It relies on the hypothesis that the vibrational normal modes having the lowest frequencies
(also named soft modes) describe the largest movements in a protein and are the ones that are
functionally relevant. In particular, NMA can be used to study the dynamics of TM regions, but no tool
making this approach available to non-experts has existed so far.</p><p>Results: We developed the web-application TMM@ (TransMembrane α-helical Mobility analyzer).
It uses NMA to characterize the propensity of transmembrane α-helices to be displaced. Starting
from a structure file in PDB format, the server computes the normal modes of the protein and
identifies which helices in the bundle are the most mobile. Each analysis is performed independently
from the others and results can be visualized using only a web browser. No additional plug-in or
software is required. For users who would like to further analyze the output data with their
favourite software, raw results can also be downloaded.</p><p>Conclusion: We built a novel and unique tool, TMM@, to study the mobility of transmembrane
α-helices. The tool can be applied to, for example, membrane transporters and provides biologists
studying transmembrane proteins with an approach to investigate which α-helices are likely to
undergo the largest displacements, and hence which helices are most likely to be involved in the
transportation of molecules in and out of the cell.</p>
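The soft-mode idea underlying NMA can be sketched on a toy system; the bead-spring Hessian below is an illustrative stand-in for a real protein model, not TMM@'s actual computation.

```python
import numpy as np

# Toy elastic-network Hessian for a 1-D chain of five beads joined by unit
# springs -- an illustrative stand-in for a protein's Hessian.
n = 5
H = np.zeros((n, n))
for i in range(n - 1):
    H[i, i] += 1.0
    H[i + 1, i + 1] += 1.0
    H[i, i + 1] -= 1.0
    H[i + 1, i] -= 1.0

# Symmetric eigendecomposition; eigenvalues come back in ascending order.
evals, evecs = np.linalg.eigh(H)

# The zero eigenvalue corresponds to rigid-body translation; the next-lowest
# ("soft") modes describe the largest collective motions of the chain.
soft = evecs[:, 1]
```

Ranking helices by their displacement amplitude in such soft modes is, in spirit, how one identifies the most mobile parts of a structure.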
Mon, 02 Jul 2007 00:00:00 GMT
http://hdl.handle.net/1956/8843
A simple spreadsheet-based, MIAME-supportive format for microarray data: MAGE-TAB
Rayner, Tim F.; Rocca-Serra, Philippe; Spellman, Paul T.; Causton, Helen C.; Farne, Anna; Holloway, Ele; Irizarry, Rafael A.; Liu, Junmin; Maier, Donald S.; Miller, Michael; Petersen, Kjell; Quackenbush, John; Sherlock, Gavin; Stoeckert, Christian J. Jr; White, Joseph; Whetzel, Patricia L.; Wymore, Farrell; Parkinson, Helen; Sarkans, Ugis; Ball, Catherine A.; Brazma, Alvis
Journal article
<p>Background: Sharing of microarray data within the research community has been greatly
facilitated by the development of the disclosure and communication standards MIAME and MAGEML
by the MGED Society. However, the complexity of the MAGE-ML format has made its use
impractical for laboratories lacking dedicated bioinformatics support.</p><p>Results: We propose a simple tab-delimited, spreadsheet-based format, MAGE-TAB, which will
become a part of the MAGE microarray data standard and can be used for annotating and
communicating microarray data in a MIAME compliant fashion.</p><p>Conclusion: MAGE-TAB will enable laboratories without bioinformatics experience or support
to manage, exchange and submit well-annotated microarray data in a standard format using a
spreadsheet. The MAGE-TAB format is self-contained, and does not require an understanding of
MAGE-ML or XML.</p>
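A tab-delimited, spreadsheet-style format of this kind can be read with entirely standard tools; the snippet below uses column names loosely modeled on MAGE-TAB-style headers, which should be treated as illustrative rather than as the official specification.

```python
import csv
import io

# A made-up tab-delimited snippet in the spirit of a spreadsheet-based
# annotation format; the column names are illustrative examples.
tsv = (
    "Source Name\tCharacteristics[organism]\tArray Design REF\n"
    "sample1\tHomo sapiens\tA-AFFY-33\n"
)

# Any spreadsheet application or standard TSV parser can read such a file.
rows = list(csv.DictReader(io.StringIO(tsv), delimiter="\t"))
```

This parse-anywhere property is what makes a tab-delimited format usable by laboratories without dedicated bioinformatics support.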
Mon, 06 Nov 2006 00:00:00 GMT
http://hdl.handle.net/1956/8761
Interactively illustrating polymerization using three-level model fusion
Kolesar, Ivan; Parulek, Julius; Viola, Ivan; Bruckner, Stefan; Stavrum, Anne-Kristin; Hauser, Helwig
Journal article
<p>Background: Research in cell biology is steadily contributing new knowledge about many aspects of physiological processes, both with respect to the involved molecular structures as well as their related function. Illustrations of the spatio-temporal development of such processes are not only used in biomedical education, but also can serve scientists as an additional platform for in-silico experiments.</p>
<p>Results: In this paper, we contribute a new, three-level modeling approach to illustrate physiological processes from the class of polymerization at different time scales. We integrate physical and empirical modeling, according to which approach best suits the different involved levels of detail, and we additionally enable a form of interactive steering, while the process is illustrated. We demonstrate the suitability of our approach in the context of several polymerization processes and report from a first evaluation with domain experts.</p>
<p>Conclusion: We conclude that our approach provides a new, hybrid modeling approach for illustrating the process of emergence in physiology, embedded in a densely filled environment. Our approach of a complementary fusion of three systems combines the strong points from the different modeling approaches and is capable of bridging different spatial and temporal scales.</p>
Tue, 14 Oct 2014 00:00:00 GMT
http://hdl.handle.net/1956/8724
Identifying elemental genomic track types and representing them uniformly
Gundersen, Sveinung; Kalaš, Matúš; Abul, Osman; Frigessi, Arnoldo; Hovig, Eivind; Sandve, Geir Kjetil
Journal article
<p>Background: With the recent advances and availability of various high-throughput sequencing technologies, data on many molecular aspects, such as gene regulation, chromatin dynamics, and the three-dimensional organization of DNA, are rapidly being generated in an increasing number of laboratories. The variation in biological context, and the increasingly dispersed mode of data generation, imply a need for precise, interoperable and flexible representations of genomic features through formats that are easy to parse. A host of alternative formats are currently available and in use, complicating analysis and tool development. The issue of whether and how the multitude of formats reflects varying underlying characteristics of data has to our knowledge not previously been systematically treated.</p>
<p>Results: We here identify intrinsic distinctions between genomic features, and argue that the distinctions imply that a certain variation in the representation of features as genomic tracks is warranted. Four core informational properties of tracks are discussed: gaps, lengths, values and interconnections. From this we delineate fifteen generic track types. Based on the track type distinctions, we characterize major existing representational formats and find that the track types are not adequately supported by any single format. We also find, in contrast to the XML formats, that none of the existing tabular formats are conveniently extendable to support all track types. We thus propose two unified formats for track data, an improved XML format, BioXSD 1.1, and a new tabular format, GTrack 1.0.</p>
<p>Conclusions: The defined track types are shown to capture relevant distinctions between genomic annotation tracks, resulting in varying representational needs and analysis possibilities. The proposed formats, GTrack 1.0 and BioXSD 1.1, cater to the identified track distinctions and emphasize preciseness, flexibility and parsing convenience.</p>
Fri, 30 Dec 2011 00:00:00 GMT
http://hdl.handle.net/1956/8717
The male germ cell gene regulator CTCFL is functionally different from CTCF and binds CTCF-like consensus sites in a nucleosome composition-dependent manner
Sleutels, Frank; Soochit, Widia; Bartkuhn, Marek; Heath, Helen; Dienstbach, Sven; Bergmaier, Philipp; Franke, Vedran; Rosa-Garrido, Manuel; van de Nobelen, Suzanne; Caesar, Lisa; van der Reijden, Michael I.J.A.; Bryne, Jan Christian; van Ijcken, Wilfred F.J.; Grootegoed, J. Anton; Delgado, M. Dolores; Lenhard, Boris; Renkawitz, Rainer; Grosveld, Frank; Galjart, Niels
Journal article
<p>Background: CTCF is a highly conserved and essential zinc finger protein expressed in virtually all cell types. In conjunction with cohesin, it organizes chromatin into loops, thereby regulating gene expression and epigenetic events. The function of CTCFL or BORIS, the testis-specific paralog of CTCF, is less clear.</p>
<p>Results: Using immunohistochemistry on testis sections and fluorescence-based microscopy on intact live seminiferous tubules, we show that CTCFL is only transiently present during spermatogenesis, prior to the onset of meiosis, when the protein co-localizes in nuclei with ubiquitously expressed CTCF. CTCFL distribution overlaps completely with that of Stra8, a retinoic acid-inducible protein essential for the propagation of meiosis. We find that absence of CTCFL in mice causes sub-fertility because of a partially penetrant testicular atrophy. CTCFL deficiency affects the expression of a number of testis-specific genes, including Gal3st1 and Prss50. Combined, these data indicate that CTCFL has a unique role in spermatogenesis. Genome-wide RNA expression studies in ES cells expressing a V5- and GFP-tagged form of CTCFL show that genes that are downregulated in CTCFL-deficient testis are upregulated in ES cells. These data indicate that CTCFL is a male germ cell gene regulator. Furthermore, genome-wide DNA-binding analysis shows that CTCFL binds a consensus sequence that is very similar to that of CTCF. However, only ~3,700 out of the ~5,700 CTCFL- and ~31,000 CTCF-binding sites overlap. CTCFL binds promoters with loosely assembled nucleosomes, whereas CTCF favors consensus sites surrounded by phased nucleosomes. Finally, an ES cell-based rescue assay shows that CTCFL is functionally different from CTCF.</p>
<p>Conclusions: Our data suggest that nucleosome composition specifies the genome-wide binding of CTCFL and CTCF. We propose that the transient expression of CTCFL in spermatogonia and preleptotene spermatocytes serves to occupy a subset of promoters and maintain the expression of male germ cell genes.</p>
Mon, 18 Jun 2012 00:00:00 GMT
http://hdl.handle.net/1956/8636
Highlights from the Eighth International Society for Computational Biology (ISCB) Student Council Symposium 2012
Goncearenco, Alexander; Grynberg, Priscila; Botvinnik, Olga B.; Macintyre, Geoff; Abeel, Thomas
Journal article
The report summarizes the scientific content of the annual symposium organized by the Student Council of the International Society for Computational Biology (ISCB) held in conjunction with the Intelligent Systems for Molecular Biology (ISMB) conference in Long Beach, California on July 13, 2012.
Fri, 14 Dec 2012 00:00:00 GMT
http://hdl.handle.net/1956/8604
FreeContact: fast and free software for protein contact prediction from residue co-evolution
Kaján, László; Hopf, Thomas A.; Kalaš, Matúš; Marks, Debora S.; Rost, Burkhard
Journal article
<p>Background: 20 years of improved technology and growing sequence databases now render residue-residue contact constraints in large protein families, derived through correlated mutations, accurate enough to drive de novo predictions of protein three-dimensional structure. The method EVfold broke new ground using mean-field Direct Coupling Analysis (EVfold-mfDCA); the method PSICOV applied a related concept by estimating a sparse inverse covariance matrix. Both methods (EVfold-mfDCA and PSICOV) are publicly available, but both require too much CPU time for interactive applications. Moreover, EVfold-mfDCA depends on proprietary software.</p>
<p>Results: Here, we present FreeContact, a fast, open source implementation of EVfold-mfDCA and PSICOV. On a test set of 140 proteins, FreeContact was almost eight times faster than PSICOV without decreasing prediction performance. The EVfold-mfDCA implementation of FreeContact was over 220 times faster than PSICOV with negligible performance decrease. The original EVfold-mfDCA was unavailable for testing due to its dependency on proprietary software. FreeContact is implemented as the free C++ library “libfreecontact”, complete with command line tool “freecontact”, as well as Perl and Python modules. All components are available as Debian packages. FreeContact supports the BioXSD format for interoperability.</p>
<p>Conclusions: FreeContact provides the opportunity to compute reliable contact predictions in any environment (desktop or cloud).</p>
Wed, 26 Mar 2014 00:00:00 GMT
http://hdl.handle.net/1956/8570
Sketch-based Modelling and Conceptual Visualization of Geomorphological Processes for Interactive Scientific Communication
Natali, Mattia
Doctoral thesis
<p>Throughout this dissertation, solutions for rapid digitalization of ideas will be defined. More precisely, the focus is on interactive scientific sketching and communication of geology, where the result is a digital illustrative 3D model. Results are achieved through a sketch-based modelling approach which gives the user a more natural and intuitive modelling process, hence leading to a quicker definition of a geological illustration.</p>
<p>To be able to quickly externalize and communicate one's ideas as a digital 3D model can be of importance. For instance, students may profit from explanations supported by interactive illustrations. Exchange of information and hypotheses between domain experts is also a targeted situation in our work. Furthermore, illustrative models are frequently employed in business, when decisional meetings take place, for convincing the management that a project is worth funding.</p>
<p>An advantage of digital models is that they can be saved and are easy to distribute. In contrast to 2D images or paper sketches, one can interact with digital 3D models, and they can be transferred to portable devices for easy access (for instance during geological field studies). Another advantage, compared to standard geological illustrations, is that if a model has been created with internal structures, it can be arbitrarily cut and inspected.</p>
<p>Different solutions for different aspects of subsurface geology are presented in this dissertation. To express folding and faulting processes, a first modelling approach based on cross-sectional sketches is introduced. User defined textures can be associated to each layer, and can then be deformed with sketch strokes, for communicating layer properties such as rock type and grain size.</p>
<p>A following contribution includes a simple and compact representation to model and visualize 3D stratigraphic models. With this representation, erosion and deposition of fluvial systems are easy to specify and display. Ancient river channels and other geological features, which are present in the subsurface, can be accessed by means of a volumetric representation.</p>
<p>Geological models are obtained and visualized by sequentially defining stratigraphic layers, where each layer represents a unique erosion or deposition event. Evolution of rivers and deltas is important for geologists when interpreting the stratigraphy of the subsurface, in particular because it changes the landscape morphology and because river deposits are potential hydrocarbon reservoirs.</p>
<p>Time plays a fundamental role in geological processes. Animations are well suited for communicating temporal change and a contribution in this direction is also given.</p>
<p>With the techniques developed in this thesis, it becomes possible to produce a range of geological scenarios. The focus is on enabling geologists to create their subsurface models by means of sketches, to quickly communicate concepts and ideas rather than detailed information.</p>
<p>Although the proposed techniques are simple to use and require little design effort, complex models can be realized.</p>
Fri, 19 Sep 2014 00:00:00 GMT
http://hdl.handle.net/1956/8567
Elastic Grid Resources using Cloud Technologies
Carlsen, Joachim Christoffer
Master thesis
A Large Ion Collider Experiment (ALICE) is one of four experiments at the
Large Hadron Collider (LHC) at CERN. The detectors in the ALICE experiment
produce data at a rate of 4 GB/s after being filtered and compressed online. The
data are stored and processed in a Grid system. A Grid system allows for sharing
globally distributed computing resources crossing administrative domains. The
ALICE collaboration has created its own Grid middleware called Alice Environment
(AliEn) to facilitate the processing and storage.
This project will examine a possible way of better utilizing AliEn computing
resources by using Cloud techniques, more specifically OpenStack[6] together
with the virtual appliance CernVM. Cloud techniques allow for adding and
removing virtual computing resources through an API, providing elasticity in a
computing center. This technique makes it possible to remove the need for
dedicated physical AliEn computing resources and instead make them disposable;
the virtual computing resources should only exist while needed.
This report will begin with a short general introduction and history of the
technologies used in this thesis, followed by an introduction to Grid technology and
AliEn. An introduction to Cloud technologies, OpenStack, and Virtual machines will
then follow. After introducing the main concepts and tools, a description of a testbed
and its setup process will be given, followed by an implementation of a prototype.
Lastly, a short performance test, evaluation of the prototype and conclusions will
follow.
Results show that implementing an elastic AliEn site using Cloud techniques is
indeed feasible. The solution gives an overhead of ~2:30 minutes per AliEn job agent,
which is short compared to the lifespan of AliEn job agents, normally 48
hours. Additionally, some possible ways of further reducing the overhead will be
described in this report.
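The elasticity idea, scaling virtual resources to demand, can be sketched as a simple sizing policy; the function and its parameters below are illustrative assumptions, not part of AliEn or the OpenStack API.

```python
# Hypothetical sizing policy for an elastic AliEn site: scale the number of
# virtual machines to the number of waiting job agents. The function and its
# parameters are illustrative assumptions, not AliEn or OpenStack code.
def desired_workers(waiting_jobs, jobs_per_vm=4, max_vms=10):
    """Return how many VMs to keep running so that job agents only wait
    while a VM boots; VMs are released again once no jobs remain."""
    need = -(-waiting_jobs // jobs_per_vm)  # ceiling division
    return min(need, max_vms)
```

An external control loop would compare this target against the currently running instances and call the cloud API to boot or terminate CernVM instances accordingly.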
Mon, 02 Jun 2014 00:00:00 GMT
http://hdl.handle.net/1956/8552
On the feasibility of distributed systems for interactive visual analysis of omics data
Farag, Yehia Mohamed
Master thesis
The purpose of this thesis is to discuss the
feasibility of developing a distributed system for
interactive visual analysis of omics data,
demonstrating how selected modules from the
standalone J-Express Modularized application can
be converted into a web-based distributed system
while maintaining the original application's
interactivity and functionality.
A distributed system is adopted as the main
architectural design, where the client-side will
mainly be responsible for managing the interactive
visualisation of the generated results, and the
server-side is responsible for the remote
processing and storage of the data files in
addition to sharing the data resources between the
users.
A prototype system for interactive visual
analysis of omics data will be implemented as an
online service for non-technical life science
researchers. This provides the opportunity to
address the main challenges and limitations of
such systems, as well as outlining the additive
benefits to the omics-data analysis process
achieved by the use of distributed systems.
The main goal for this thesis can thus be
summarised as:
Investigate the feasibility of developing a web
distributed system that supports interactive
visualisation of omics data analysis based on
converting selected modules from J-Express
Modularized, and discuss the main benefits and
limitations of such a system.
From the study and the prototype evaluation, we
conclude that even with the current limitations
for the browser memory and computing capabilities,
it is still feasible to develop a browser based
distributed system for interactive visual analysis
of omics data. This system will achieve good
performance and scalability, in addition to being
biologist-friendly.
Mon, 02 Jun 2014 00:00:00 GMT
http://hdl.handle.net/1956/8549
Software modeling of the propagation of electromagnetic beams through different media
Bruket, Kjetil Rørvik
Master thesis
Propagation and focusing of electromagnetic beams through layered anisotropic media is of interest in the field of optical data storage, where thin layers are mounted on glass substrates; display technology, where polarised light passes through liquid crystals; and the biological and material sciences, where objects are imaged through thin sheet glass. In the case of computing a focused or diffracted field of a three-dimensional optical problem, it is necessary to evaluate two-dimensional integrals with amplitude and phase functions. Obtaining numerical results for these kinds of problems is a difficult task because of the rapidly oscillating integrands and the singularities in the Fresnel transmission and reflection coefficients. If a layered medium is an anisotropic crystal, the numerical analysis is complicated significantly, as it gives rise to birefringence and mode coupling.
Research within this field is jointly being conducted by the Department of Engineering at the University College of Bergen [HIB] and the Department of Physics and Technology at the University of Bergen [UIB]. Their researchers have obtained both exact and asymptotic results for propagated and focused fields in uniaxial crystals, and they have been adopting various techniques to obtain numerical results. The double integrals, which can be used for obtaining the results, can be reduced to single integrals by means of applying parabolic approximations to the phase and amplitude functions, and even though the approximations tend to give numerical results, the procedure is time-consuming. As such obtaining numerical results from this sort of procedure might take a few minutes to several computing hours. To date, they have typically written software in Fortran, in order to achieve the numerical results, and this intermediary data is then used as input data for the MATLAB software, in order to present the final results.
This process of having to write one program to produce results that are passed to another, just to be able to produce the final results, is convoluted; the use of different programs in series creates unnecessary work for the user. The researchers have therefore requested a single software solution that handles both the numerical calculations and the graphical presentation of the propagated electromagnetic fields from the initial data, which describes the characteristics of the medium and the type of electromagnetic wave.
Mon, 02 Jun 2014 00:00:00 GMT
http://hdl.handle.net/1956/8548
Fast methods to solve the pooling problem
Kejriwal, Anisha
Master thesis
In pipeline transportation of natural gas, simple network flow
problems are replaced by hard ones when bounds on the flow
quality are imposed.
The sources, typically represented by gas wells, provide flow of
unequal compositions.
For instance, some sources may be richer in undesired
contaminants, such as CO_2, than others.
At the terminals, constraints on the relative content of the
contaminant may be imposed.
Flow streams are blended at junction points in the network,
where the relative CO_2-content becomes a weighted average of
the relative CO_2-content in entering streams.
To account for the quality bounds at the terminals, the quality
therefore must be traced from the sources via junction points to
the terminals.
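The junction blending rule described above can be written out directly; this is a generic sketch of the flow-weighted average, not code from the thesis.

```python
def blended_quality(flows):
    """Relative CO2 content leaving a junction point: the flow-weighted
    average of the relative CO2 content of the entering streams.
    'flows' is a list of (flow_rate, co2_fraction) pairs."""
    total = sum(rate for rate, _ in flows)
    return sum(rate * frac for rate, frac in flows) / total
```

Tracing quality through the network amounts to applying this rule at every junction from the sources toward the terminals, which is exactly what makes the quality-constrained problem nonlinear and hard.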
The problem of allocating flow at minimum cost is referred to as
the pooling problem when the above-mentioned quality bounds
are imposed.
It is known that the pooling problem is NP-hard, which means
that it is very unlikely that exact solutions can be found for
large-scale instances.
Some exact methods, based on strong mathematical
formulations and intended for instances of small and medium
size, have recently been developed.
However, the literature offers few approaches to approximation
algorithms and other inexact methods dedicated for large
pooling problem instances.
This thesis focuses on the development of inexact or heuristic
techniques for the pooling problem.
The aim of these techniques is to find good feasible solutions for
large pooling problem instances at a reasonable computational
cost, although the methods do not guarantee global optimality.
In order to achieve this, three approaches are discussed in this
thesis.
First, we propose an improvement heuristic which iteratively
reduces the total cost.
Since the quality of the solutions provided by the improvement
method depends upon good initial solutions,
we propose construction heuristic methods that give good
feasible solutions for the pooling problem.
The methods construct a sequence of sub-graphs, each of which
contains a single terminal, and an associated linear program for
optimizing the flow to the terminal.
The optimal solution to each linear program serves as a feasible
augmentation of total flow accumulated so far.
Finally, we combine both the above mentioned methods, such
that the solution given by the construction heuristic is used as
the starting solution by the improvement method.
Computational experiments indicate that all the heuristic
methods proposed in this thesis are faster than previously
proposed heuristics.
Since the exact solutions are not known in large instances, the
solutions given by the heuristic methods are compared to lower
bounds on the optimal objective function value.
In this thesis, we also propose a constraint generation algorithm
that aims to compute lower bounds on the minimum cost quickly.
Sat, 31 May 2014 00:00:00 GMThttp://hdl.handle.net/1956/85482014-05-31T00:00:00ZLinear dependencies between non-uniform distributions in DES
http://hdl.handle.net/1956/8545
Linear dependencies between non-uniform distributions in DES
Fauskanger, Stian
Master thesis
Davies and Murphy explained some non-uniform distributions of
the output from pairs and triplets of S-boxes in DES, and how they
are completely dependent on some key bits. There are linear
dependencies between these distributions. In this thesis, we
describe these linear dependencies. We also describe linear
dependencies between the distributions of the output from three
adjacent S-boxes after n rounds in DES. We have found all linear
dependencies between the distributions of the output from 5 of the
S-box triplets in full DES. The dependencies originate from
properties common to all S-boxes in DES.
Fri, 30 May 2014 00:00:00 GMThttp://hdl.handle.net/1956/85452014-05-30T00:00:00ZBioHackathon series in 2011 and 2012: penetration of ontology and linked data in life science domains
http://hdl.handle.net/1956/8517
BioHackathon series in 2011 and 2012: penetration of ontology and linked data in life science domains
Katayama, Toshiaki; Wilkinson, Mark D.; Aoki-Kinoshita, Kiyoko F.; Kawashima, Shuichi; Yamamoto, Yasunori; Yamaguchi, Atsuko; Okamoto, Shinobu; Kawano, Shin; Kim, Jin-Dong; Wang, Yue; Wu, Hongyan; Kano, Yoshinobu; Ono, Hiromasa; Bono, Hidemasa; Kocbek, Simon; Aerts, Jan; Akune, Yukie; Antezana, Erick; Arakawa, Kazuharu; Aranda, Bruno; Baran, Joachim; Bolleman, Jerven; Bonnal, Raoul J. P.; Buttigieg, Pier Luigi; Campbell, Matthew P.; Chen, Yi-an; Chiba, Hirokazu; Cock, Peter J. A.; Cohen, K. Bretonnel; Constantin, Alexandru; Duck, Geraint; Dumontier, Michel; Fujisawa, Takatomo; Fujiwara, Toyofumi; Goto, Naohisa; Hoehndorf, Robert; Igarashi, Yoshinobu; Itaya, Hidetoshi; Ito, Maori; Iwasaki, Wataru; Kalaš, Matúš; Katoda, Takeo; Kim, Taehong; Kokubu, Anna; Komiyama, Yusuke; Kotera, Masaaki; Laibe, Camille; Lapp, Hilmar; Lütteke, Thomas; Marshall, M. Scott; Mori, Takaaki; Mori, Hiroshi; Morita, Mizuki; Murakami, Katsuhiko; Nakao, Mitsuteru; Narimatsu, Hisashi; Nishide, Hiroyo; Nishimura, Yosuke; Nyström-Persson, Johan; Ogishima, Soichi; Okamura, Yasunobu; Okuda, Shujiro; Oshita, Kazuki; Packer, Nicki H; Prins, Pjotr; Ranzinger, Rene; Rocca-Serra, Philippe; Sansone, Susanna; Sawaki, Hiromichi; Shin, Sung-Ho; Splendiani, Andrea; Strozzi, Francesco; Tadaka, Shu; Toukach, Philip; Uchiyama, Ikuo; Umezaki, Masahito; Vos, Rutger; Whetzel, Patricia L.; Yamada, Issaku; Yamasaki, Chisato; Yamashita, Riu; York, William S.; Zmasek, Christian M.; Kawamoto, Shoko; Takagi, Toshihisa
Journal article
Abstract
The application of semantic technologies to the integration of biological data and the interoperability of bioinformatics analysis and visualization tools has been the common theme of a series of annual BioHackathons hosted in Japan for the past five years. Here we provide a review of the activities and outcomes from the BioHackathons held in 2011 in Kyoto and 2012 in Toyama. In order to efficiently implement semantic technologies in the life sciences, participants formed various sub-groups and worked on the following topics: Resource Description Framework (RDF) models for specific domains, text mining of the literature, ontology development, essential metadata for biological databases, platforms to enable efficient Semantic Web technology development and interoperability, and the development of applications for Semantic Web data. In this review, we briefly introduce the themes covered by these sub-groups. The observations made, conclusions drawn, and software development projects that emerged from these activities are discussed.
Wed, 05 Feb 2014 00:00:00 GMThttp://hdl.handle.net/1956/85172014-02-05T00:00:00ZWhole genome sequencing of the fish pathogen Francisella noatunensis subsp. orientalis Toba04 gives novel insights into Francisella evolution and pathogenecity
http://hdl.handle.net/1956/8516
Whole genome sequencing of the fish pathogen Francisella noatunensis subsp. orientalis Toba04 gives novel insights into Francisella evolution and pathogenecity
Sridhar, Settu; Sharma, Animesh; Kongshaug, Heidi; Nilsen, Frank; Jonassen, Inge
Journal article
<p>Background: Francisella is a genus of gram-negative bacteria that are highly virulent in fish and humans; F.
tularensis causes the serious disease tularaemia in humans. Recently, Francisella species have been reported to
cause mortality in aquaculture species like Atlantic cod and tilapia. We have completed the sequencing and draft
assembly of the Francisella noatunensis subsp. orientalis Toba04 strain isolated from farmed tilapia. Compared to
other available Francisella genomes, it is most similar to the genome of Francisella philomiragia subsp. philomiragia,
a free-living bacterium not virulent to human.</p><p>Results: The genome is rearranged compared to the available Francisella genomes even though we found no
IS-elements in the genome. Nearly 16% of the predicted ORFs are pseudogenes. Computational pathway
analysis indicates that a number of the metabolic pathways are disrupted due to pseudogenes. Comparing the
novel genome with other available Francisella genomes, we found around 2.5% of unique genes present in
Francisella noatunensis subsp. orientalis Toba04 and a list of genes uniquely present in the human-pathogenic
Francisella subspecies. Most of these genes might have been acquired from other bacterial species through
horizontal gene transfer. Comparative analysis between the human and fish pathogens also provides insights into
genes responsible for pathogenicity. Our analysis of pseudogenes indicates that the pseudogenization of the
Francisella subspecies from tilapia is old, with a large number of pseudogenes having more than one inactivating mutation.</p><p>Conclusions: The fish pathogen lost non-essential genes some time ago. Evolutionary analysis of the Francisella
genomes strongly suggests that human and fish pathogenic Francisella species have evolved independently from
free-living metabolically competent Francisella species. These findings will contribute to understanding the
evolution of Francisella species and pathogenesis.</p>
Tue, 06 Nov 2012 00:00:00 GMThttp://hdl.handle.net/1956/85162012-11-06T00:00:00ZThe Weight Distributions of Several Classes of Cyclic Codes From APN Monomials
http://hdl.handle.net/1956/8430
The Weight Distributions of Several Classes of Cyclic Codes From APN Monomials
Li, Chunlei; Li, Nian; Helleseth, Tor; Ding, Cunsheng
Journal article
Let m ≥ 3 be an odd integer and p be an odd prime.
In this paper, a number of classes of three-weight cyclic codes C(1,e) over F_p, which have
parity-check polynomial m_1(x)m_e(x), are presented by examining general conditions on the
parameters p, m and e, where m_i(x) is the minimal polynomial of π^(−i) over F_p for a
primitive element π of F_(p^m). Furthermore, for p ≡ 3 (mod 4) and a positive integer e
satisfying (p^k + 1)·e ≡ 2 (mod p^m − 1) for some positive integer k with gcd(m, k) = 1,
the value distributions of the exponential sums
T(a, b) = Σ_{x ∈ F_(p^m)} ω^(Tr(ax + bx^e))
and
S(a, b, c) = Σ_{x ∈ F_(p^m)} ω^(Tr(ax + bx^e + cx^s)), where s = (p^m − 1)/2,
are determined. As an application, the value distribution of S(a, b, c) is utilized to derive the
weight distribution of the cyclic codes C(1,e,s) with parity-check polynomial m_1(x)m_e(x)m_s(x).
In the case of p = 3 and even e satisfying the above condition, the dual of the cyclic code
C(1,e,s) has optimal minimum distance.
Fri, 01 Aug 2014 00:00:00 GMThttp://hdl.handle.net/1956/84302014-08-01T00:00:00ZOptimal ternary cyclic codes with minimum distance four and five
http://hdl.handle.net/1956/8429
Optimal ternary cyclic codes with minimum distance four and five
Li, Nian; Li, Chunlei; Helleseth, Tor; Ding, Cunsheng; Tang, Xiaohu
Journal article
Cyclic codes are an important subclass of linear codes and
have wide applications in data storage systems, communication
systems and consumer electronics. In this paper, two
families of optimal ternary cyclic codes are presented. The
first family of cyclic codes has parameters [3^m − 1, 3^m − 1 − 2m, 4]
and contains a class of conjectured cyclic codes and
several new classes of optimal cyclic codes. The second family
of cyclic codes has parameters [3^m − 1, 3^m − 2 − 2m, 5]
and contains a number of classes of cyclic codes that are
obtained from perfect nonlinear functions over F_(3^m), where
m > 1 is a positive integer.
Sat, 01 Nov 2014 00:00:00 GMThttp://hdl.handle.net/1956/84292014-11-01T00:00:00ZSequences and Linear Codes from Highly Nonlinear Functions
http://hdl.handle.net/1956/8414
Sequences and Linear Codes from Highly Nonlinear Functions
Li, Chunlei
Doctoral thesis
Due to optimal nonlinearity and differential uniformity, perfect nonlinear
(PN) and almost perfect nonlinear (APN) functions are of great
importance in cryptography. It is interesting that they also define optimal
objects in other domains of mathematics and information theory.
This dissertation is devoted to exploring the application of highly
nonlinear functions, especially PN and APN functions, to the construction
of low-correlation sequences and optimal linear codes. For an
arbitrary odd prime p, there are only two basic classes of two-level
auto-correlation p-ary sequences with no subfield structures: the m-sequences
and the Helleseth-Gong sequences, where the Helleseth-Gong
sequences are closely related to a class of p-ary perfect nonlinear functions.
Papers I and II are dedicated to investigating the cross-correlation
between the p-ary m-sequences and d-decimated Helleseth-Gong sequences
for some decimations d, and to constructing sequence families
with low correlation from them. Papers III-IV have focused on the study
of linear codes defined from highly nonlinear functions. Paper III utilizes
some highly nonlinear functions including PN and APN functions
to construct ternary cyclic codes with the optimal minimum (Hamming)
distance. Paper IV further investigates the weight distribution of some
optimal cyclic codes proposed in Paper III. Paper V examines the covering
radius of some linear codes defined from PN and APN functions
and presents a number of quasi-perfect linear codes.
Mon, 16 Jun 2014 00:00:00 GMThttp://hdl.handle.net/1956/84142014-06-16T00:00:00ZPractical Aspects of the Graph Parameter Boolean-width
http://hdl.handle.net/1956/8406
Practical Aspects of the Graph Parameter Boolean-width
Sharmin, Sadia
Doctoral thesis
Mon, 18 Aug 2014 00:00:00 GMThttp://hdl.handle.net/1956/84062014-08-18T00:00:00ZData clustering optimization with visualization
http://hdl.handle.net/1956/8362
Data clustering optimization with visualization
Guillaume, Fabien
Master thesis
This thesis studies possible applications of
Particle Swarm Optimization in kernel clustering,
dynamic modeling and artificial neural networks.
Thu, 20 Mar 2014 00:00:00 GMThttp://hdl.handle.net/1956/83622014-03-20T00:00:00ZA polynomial-time solvable case for the NP-hard problem Cutwidth
http://hdl.handle.net/1956/8355
A polynomial-time solvable case for the NP-hard problem Cutwidth
Lilleeng, Simen
Master thesis
The Cutwidth problem is a notoriously hard problem, and its complexity is open on several interesting graph classes. Motivated by this fact, we investigate the problem on superfragile graphs, a graph class on which the complexity of the Cutwidth problem is open. We give an algorithm that solves Cutwidth on superfragile graphs in O(n^2) time and O(n) space, thus resolving the complexity of the Cutwidth problem on superfragile graphs. We also explore the usefulness of the algorithm for cutwidth on threshold graphs by Heggernes, Lokshtanov, Mihai and Papadopoulos as an approximation algorithm for cutwidth on other graph classes. The Cutwidth problem is NP-hard for general graphs, and a brute force algorithm would require O(n!) time. We give two faster algorithms solving the Cutwidth problem: one algorithm applying dynamic programming that runs in O^*(2^n) time and space, and one algorithm that runs in O^*(4^n) time and O(n log n) space by applying the divide-and-conquer technique. Finally, we take a look at a similar problem called Optimal Linear Arrangement and suggest algorithms for solving the problem on threshold graphs and superfragile graphs in polynomial time.
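To make the O(n!) brute-force baseline concrete, here is a minimal sketch (our own illustrative code, not the thesis's algorithm) that enumerates all linear arrangements and returns the smallest maximum cut:

```python
from itertools import permutations

def cutwidth(vertices, edges):
    # Cutwidth = min over vertex orderings of the maximum number of
    # edges crossing any gap between consecutive positions.
    best = float("inf")
    for order in permutations(vertices):
        pos = {v: i for i, v in enumerate(order)}
        width = 0
        for gap in range(len(order) - 1):
            # count edges spanning the gap between positions gap and gap+1
            cut = sum(1 for u, v in edges
                      if min(pos[u], pos[v]) <= gap < max(pos[u], pos[v]))
            width = max(width, cut)
        best = min(best, width)
    return best
```

A path on three vertices has cutwidth 1, while the star K_{1,3} has cutwidth 2, since whichever side of the center holds two leaves forces two edges across the adjacent gap.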
Mon, 02 Jun 2014 00:00:00 GMThttp://hdl.handle.net/1956/83552014-06-02T00:00:00ZSo you've got IPv6 address space. Can you defend it?
http://hdl.handle.net/1956/8336
So you've got IPv6 address space. Can you defend it?
Sande, Mikal
Master thesis
Internet Protocol version 6 (IPv6) is the successor of Internet Protocol version 4 (IPv4). IPv6 will become the next standard networking protocol on the Internet. It brings with it a great increase in address space, changes to network operations, and new network security concerns. In this thesis we examine IPv6 from a security perspective. The security of IPv6 is important to all protocols that use IPv6 on the Internet. The goal of this thesis is to introduce the reader to existing IPv6 security challenges, demonstrate how IPv6 changes network security and show how IPv6 is being improved.
Thu, 29 May 2014 00:00:00 GMThttp://hdl.handle.net/1956/83362014-05-29T00:00:00ZVisual Cavity Analysis in Molecular Simulations
http://hdl.handle.net/1956/8305
Visual Cavity Analysis in Molecular Simulations
Parulek, Julius; Turkay, Cagatay; Reuter, Nathalie; Viola, Ivan
Journal article
Molecular surfaces provide a useful means for analyzing interactions between biomolecules, such as the identification and characterization of ligand binding sites on a host macromolecule. We present a novel technique which extracts potential binding sites, represented by cavities, and characterizes them by 3D graphs and by amino acids. The binding sites are extracted using implicit function sampling and graph algorithms. We propose an advanced cavity exploration technique based on the graph parameters and associated amino acids. Additionally, we interactively visualize the graphs in the context of the molecular surface. We apply our method to the analysis of MD simulations of Proteinase 3, where we verify the previously described cavities and suggest a new potential cavity to be studied.
Tue, 12 Nov 2013 00:00:00 GMThttp://hdl.handle.net/1956/83052013-11-12T00:00:00ZImproving Parallel Sparse Matrix-vector Multiplication
http://hdl.handle.net/1956/8024
Improving Parallel Sparse Matrix-vector Multiplication
Tessem, Torbjørn Johnsen
Master thesis
Sparse Matrix-vector Multiplication (SMvM) is a mathematical technique encountered in many programs and computations and is often heavily used. Solving SMvM in parallel allows bigger instances to be solved, and problems to be solved faster. Several strategies have been tried to improve parallel SMvM. Work has been done with regard to improved cache use, better load balance and reduced conflicts. The aim of the work conducted in this thesis is to develop new ideas and algorithms to speed up parallel SMvM on a shared memory computer. We use a method inspired by the min-makespan problem to distribute elements more evenly. We introduce a hybrid algorithm that gives better cache efficiency, and we work with colouring algorithms to avoid write conflicts.
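For reference, the sequential kernel being parallelized can be sketched as follows, assuming the common compressed sparse row (CSR) layout (the storage format and the names are our illustrative assumptions, not taken from the thesis):

```python
def csr_spmv(values, col_idx, row_ptr, x):
    # y[i] = sum of A[i][j] * x[j] over the stored nonzeros of row i.
    # values: nonzero entries, col_idx: their columns,
    # row_ptr[i]:row_ptr[i+1]: the slice of row i in the two arrays above.
    y = []
    for i in range(len(row_ptr) - 1):
        acc = 0.0
        for k in range(row_ptr[i], row_ptr[i + 1]):
            acc += values[k] * x[col_idx[k]]
        y.append(acc)
    return y
```

Parallelizing this kernel typically distributes rows (or nonzeros) across threads; the thesis's min-makespan-inspired distribution targets exactly the imbalance that arises when rows carry very different nonzero counts.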
Thu, 19 Dec 2013 00:00:00 GMThttp://hdl.handle.net/1956/80242013-12-19T00:00:00ZSubstation Location in Offshore Wind Farms - A Planar Multi-Facility Location-Routing Problem
http://hdl.handle.net/1956/8017
Substation Location in Offshore Wind Farms - A Planar Multi-Facility Location-Routing Problem
Amland, Thomas
Master thesis
In offshore wind farms, two important parts of the design are to determine locations for substations and a cabling layout that connects every turbine to a substation. These problems are interconnected, as the cable layout depends on the choice of location for the substation. In this thesis we investigate how to set the location of substations such that the total cable cost is minimized.
Fri, 14 Mar 2014 00:00:00 GMThttp://hdl.handle.net/1956/80172014-03-14T00:00:00ZThroughput and robustness of bioinformatics pipelines for genome-scale data analysis
http://hdl.handle.net/1956/7906
Throughput and robustness of bioinformatics pipelines for genome-scale data analysis
Sztromwasser, Paweł
Doctoral thesis
<p>The post-genomic era has been heavily influenced by the rapid development of high-throughput
molecular-screening technologies, which has enabled genome-wide analysis
approaches on an unprecedented scale. The constantly decreasing cost of producing
experimental data resulted in a data deluge, which has led to technical challenges
in distributed bioinformatics infrastructure and computational biology methods. At the
same time, the advances in deep-sequencing allowed intensified interrogation of human
genomes, leading to prominent discoveries linking our genetic makeup with numerous
medical conditions. The fast and cost-effective sequencing technology is expected to
soon become instrumental in personalized medicine. The transition of the methodology
related to genome sequencing and high-throughput data analysis from the research
domain to a clinical service is challenging in many aspects. One of them is providing
medical personnel with accessible, robust, and accurate methods for analysis of
sequencing data.</p><p>The computational protocols used for analysis of the sequencing data are complex,
parameterized, and in continuous development, making results of data analysis sensitive
to factors such as the software used and the parameter values selected. However,
the influence of parameters on results of computational pipelines has not been systematically
studied. To fill this gap, we investigated the robustness of a genetic variant
discovery pipeline against changes of its parameter settings. Using two sensitivity
screening methods, we evaluated parameter influence on the identified genetic variants,
and found that the parameters have irregular effects and are inter-dependent. Only a
fraction of parameters were identified to have considerable impact on the results, suggesting
that screening parameter sensitivity can lead to simpler pipeline configuration.
Our results showed that, although a simple metric can be used to examine parameter
influence, more informative results are obtained using a criterion related to the accuracy
of pipeline results. Using the results of sensitivity screening, we have shown that
the influential pipeline parameters can be adjusted to effectively increase the accuracy
of variant discovery. Such information is invaluable for researchers tuning pipeline parameters,
and can guide the search for optimal settings for computational pipelines in
a clinical setting. Contrasting the two applied screening methods, we learned more
about specific requirements of robustness analysis of computational methods, and were
able to suggest a more tailored strategy for parameter screening. Our contributions
demonstrate the importance and the benefits of systematic robustness analysis of bioinformatics
pipelines, and indicate that more efforts are needed to advance research in
this area.</p><p>Web services are commonly used to provide interoperable, programmatic access to bioinformatics resources, and consequently, they are natural building blocks of bioinformatics
analysis workflows. However, in the light of the data deluge, their usability
for data-intensive applications has been questioned. We investigated applicability of
standard Web services to high-throughput pipelines, and showed how throughput and
performance of such pipelines can be improved. By developing two complementary approaches,
that take advantage of established and proven optimization mechanisms, we
were able to enhance Web service communication in a non-intrusive manner. The first
strategy increases throughput of Web service interfaces by a stream-like invocation pattern.
This additionally allows for data-pipelining between consecutive steps of a workflow.
The second approach facilitates peer-to-peer data transfer between Web services
to increase the capacity of the workflow engine. We evaluated the impact of the enhancements
on genome-scale pipelines, and showed that high-throughput data analysis
using standard Web service pipelines is possible, when the technology is used sensibly.
However, considering the contemporary data volumes and their expected growth,
methods capable of handling even larger data should be sought.</p><p>Systematic analysis of pipeline robustness requires intensive computations, which are
particularly demanding for high-throughput pipelines. Providing more efficient methods
of pipeline execution is fundamental for enabling such examinations on a large scale.
Furthermore, the standardized interfaces of Web services facilitate automated
executions, and are perfectly suited for coordinating large computational experiments.
I speculate that, provided wide adoption of Web service technology in bioinformatics
pipelines, large-scale quality control studies, such as robustness analysis, could be
automated and performed routinely on newly published computational methods. This
work contributes to realizing such a conception, providing technical basis for building
the necessary infrastructure and suggesting methodology for robustness analysis.</p>
Wed, 19 Feb 2014 00:00:00 GMThttp://hdl.handle.net/1956/79062014-02-19T00:00:00ZDirect data transfer between SOAP web services in Orchestration
http://hdl.handle.net/1956/7905
Direct data transfer between SOAP web services in Orchestration
Subramanian, Sattanathan; Sztromwasser, Paweł; Puntervoll, Pål; Petersen, Kjell
Conference object
In scientific data analysis, workflows are used to integrate
and coordinate resources such as databases and tools. Workflows
are normally executed by an orchestrator that invokes
component services and mediates data transport between
them. Scientific data are frequently large, and brokering
large data increases the load on the orchestrator and reduces
workflow performance. To remedy this problem, we demonstrate
how plain SOAP web services can be tailored to support
direct service-to-service data transport, thus allowing
the orchestrator to delegate the data-flow. We formally define
a data-flow delegation message, develop an XML schema
for it, and analyze performance improvement of data-flow
delegation empirically in comparison with the regular orchestration
using an example bioinformatics workflow.
Sun, 01 Jan 2012 00:00:00 GMThttp://hdl.handle.net/1956/79052012-01-01T00:00:00ZData partitioning enables the use of standard SOAP Web Services in genome-scale workflows
http://hdl.handle.net/1956/7904
Data partitioning enables the use of standard SOAP Web Services in genome-scale workflows
Sztromwasser, Paweł; Puntervoll, Pål; Petersen, Kjell
Journal article
Biological databases and computational biology tools are provided by research groups
around the world, and made accessible on the Web. Combining these resources is a
common practice in bioinformatics, but integration of heterogeneous and often distributed
tools and datasets can be challenging. To date, this challenge has been commonly addressed
in a pragmatic way, by tedious and error-prone scripting. Recently, however, a more reliable
technique has been identified and proposed as the platform that would tie together
bioinformatics resources, namely Web Services. In the last decade Web Services have spread
widely in bioinformatics, and earned the title of recommended technology. However, in the
era of high-throughput experimentation, a major concern regarding Web Services is their
ability to handle large-scale data traffic. We propose a stream-like communication pattern
for standard SOAP Web Services that enables efficient flow of large data traffic between
a workflow orchestrator and Web Services. We evaluated the data-partitioning strategy
by comparing it with typical communication patterns on an example pipeline for genomic
sequence annotation. The results show that data partitioning lowers the resource demands of
services and increases their throughput, which in consequence allows in-silico experiments
to be executed on a genome scale using standard SOAP Web Services and workflows. As a
proof of principle we annotated an RNA-seq dataset using a plain BPEL workflow engine.
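Stripped to its essence, the data-partitioning pattern splits a large dataset into chunks so that each service invocation carries only one chunk rather than the whole payload. A minimal sketch (chunking by record count is our simplifying assumption):

```python
def partition(records, chunk_size):
    # Yield successive fixed-size batches of a large dataset, so each
    # service invocation carries one chunk instead of the full payload.
    for i in range(0, len(records), chunk_size):
        yield records[i:i + chunk_size]
```

An orchestrator would then invoke the annotation service once per chunk and concatenate the results, keeping the memory footprint of both orchestrator and service bounded by the chunk size.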
Sat, 01 Jan 2011 00:00:00 GMThttp://hdl.handle.net/1956/79042011-01-01T00:00:00ZA Comparison of Vertex and Edge Partitioning Approaches for Parallel Maximal Matching
http://hdl.handle.net/1956/7896
A Comparison of Vertex and Edge Partitioning Approaches for Parallel Maximal Matching
Sørnes, Alexander N
Master thesis
This thesis will compare two ways of distributing data for parallel graph
algorithms: vertex and edge partitioning, using a distributed memory system.
Previous studies on the parallelization of graph algorithms have often focused on
vertex partitioning, where each processor is assigned a set V' $\subseteq$ V
of the graph G = (V,E), yielding a one-dimensional partitioning. It has been shown,
however, that an edge partitioning (or 2D partitioning), where each processor is
assigned a set E' $\subseteq$ E, may yield a benefit in terms of a lower
communication volume.
The performance and scalability of vertex and edge partitionings are
compared by implementing the Karp-Sipser matching set algorithm for both
partitioning schemes. A matching set is a set E' $\subseteq$ E of independent
edges such that each vertex in V occurs at most once in E'.
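For background, the sequential Karp-Sipser heuristic repeatedly matches a degree-1 vertex when one exists (such a match never hurts optimality) and otherwise picks an arbitrary edge. A minimal sketch, independent of any partitioning scheme (the adjacency-dict representation is our assumption):

```python
def karp_sipser(adj):
    # adj: dict mapping each vertex to the set of its neighbours.
    # Returns a maximal matching as a list of vertex pairs.
    adj = {v: set(ns) for v, ns in adj.items()}  # private copy
    matching = []
    while any(adj.values()):
        # Prefer a degree-1 vertex: matching its only edge is safe.
        u = next((v for v, ns in adj.items() if len(ns) == 1), None)
        if u is None:
            u = next(v for v, ns in adj.items() if ns)  # arbitrary edge
        w = next(iter(adj[u]))
        matching.append((u, w))
        for z in (u, w):  # remove both endpoints from the graph
            for n in adj[z]:
                adj[n].discard(z)
            adj[z] = set()
    return matching
```

On a path of four vertices the heuristic finds a perfect matching of two edges.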
We find that while the vertex partitioned algorithm gives a significantly higher speedup,
the increased performance of the edge partitioned algorithm on more dense graphs suggests that
if the graph framework is improved further, it could lead to the implementation of an
edge partitioned matching algorithm that offers better scalability and comparable matching quality
to a vertex partitioned matching algorithm.
An edge partitioning requires a rigorous framework for handling the
communication resulting when edges owned by multiple processors are incident on
the same vertex. Hopefully, the framework developed for representing an edge partitioned
graph facilitates the implementation of other parallel graph algorithms using an edge
partitioning approach.
Mon, 09 Dec 2013 00:00:00 GMThttp://hdl.handle.net/1956/78962013-12-09T00:00:00ZEnhancing Content Management in DPG
http://hdl.handle.net/1956/7794
Enhancing Content Management in DPG
Pino Arevalo, Ana Gabriela
Master thesis
This thesis analyzes the usability aspects of PCE
and implements a new Single Page Application that
attempts to solve these issues.
Wed, 20 Nov 2013 00:00:00 GMThttp://hdl.handle.net/1956/77942013-11-20T00:00:00ZGrein. A New Non-Linear Cryptoprimitive
http://hdl.handle.net/1956/7779
Grein. A New Non-Linear Cryptoprimitive
Thorsen, Ole Rydland
Master thesis
In this thesis, we study a new stream cipher, Grein, and a new cryptoprimitive used in this cipher. The second chapter gives a brief introduction to cryptography in general. The third chapter looks at stream ciphers in general and explains the advantages and disadvantages of stream ciphers compared to block ciphers. In the fourth chapter, the most important building blocks used in stream ciphers are explained. The reader is expected to know elementary abstract algebra, as many of the results in this chapter depend on it. In the fifth chapter, the stream cipher Grain is introduced. In chapter six, the new stream cipher, Grein, is introduced. Here, we look at the different components used in the cipher and how they operate together. In chapter seven, we introduce an alteration to the Grein cryptosystem which hopefully has some advantages.
Tue, 10 Dec 2013 00:00:00 GMThttp://hdl.handle.net/1956/77792013-12-10T00:00:00ZWSDL Workshop: Semantic web application in HTML5 for the discovery, construction and analysis of workflows
http://hdl.handle.net/1956/7724
WSDL Workshop: Semantic web application in HTML5 for the discovery, construction and analysis of workflows
Cañadas, Rafael Adolfo Nozal
Master thesis
<p>WSDL-Workshop is an HTML5 web application for the discovery and exploration
of web services and for analyzing the compatibility between web services. This
is the result of a mathematical model developed from WSDL 1.1. The program
provides a graphical user interface and lets the user build a workflow composed
of services described with WSDL 1.1 and tells whether:</p>
<p><ul><li>An output is compatible with an input.</li>
<li>It is correct to link an output with an input.</li>
<li> It is correct to link a given operation after another.</li>
<li>Many operations correctly linked together still make sense as a group.</li></ul></p>
<p>In order to do that, the WSDLs must have semantic annotations so that the
computer can recognize the purpose of certain data or operations. WSDL-Workshop
uses the EDAM Ontology as a reference for semantic concepts.</p>
<p>In the discovery aspect, for a given set of WSDLs you can find services by
filtering by operation name, input or output names, or semantic annotations.
For a given operation output, it can also filter WSDLs which have inputs that
are correct to link with that output. For a given operation, it can also filter
operations which are correct to link after it.</p>
Tue, 01 Jan 2013 00:00:00 GMThttp://hdl.handle.net/1956/77242013-01-01T00:00:00ZTournaments and Optimality: New Results in Parameterized Complexity
http://hdl.handle.net/1956/7650
Tournaments and Optimality: New Results in Parameterized Complexity
Pilipczuk, Michal Pawel
Doctoral thesis
Fri, 22 Nov 2013 00:00:00 GMThttp://hdl.handle.net/1956/76502013-11-22T00:00:00ZDependencies: No Software is an Island
http://hdl.handle.net/1956/7540
Dependencies: No Software is an Island
Tellnes, Jørgen
Master thesis
In the past years, package managers, application frameworks and open-source libraries have made it vastly simpler and faster to get functioning software up and running, while cloud providers and external service providers have made it easier to get the application out into the hands of millions of users without large up-front costs.
While this recent technology development has made it possible for companies with limited resources to build impressive software and valuable services, the development has serious security implications which the current state of software development and systems engineering is not yet able to handle very well.
In this thesis, we will show that the security and availability of a system are largely determined by the surrounding "ecosystem" of dependencies, and that techniques to reduce the reliance on a system's dependencies (software libraries, services and infrastructures) are hugely beneficial.
The intended audience for this thesis is computer scientists, professional and amateur software developers, and system designers, but anyone with basic IT knowledge is encouraged to keep reading.
Tue, 15 Oct 2013 00:00:00 GMThttp://hdl.handle.net/1956/75402013-10-15T00:00:00ZTowards Privacy Management of Information Systems
http://hdl.handle.net/1956/7539
Towards Privacy Management of Information Systems
Drageide, Vidar
Master thesis
This master's thesis provides insight into the concept of privacy. It argues why privacy is important, and why developers and system owners should keep privacy in mind when developing and maintaining systems containing personal information. Following this, a strategy for evaluating the overall level of privacy in a system is defined. The strategy is then applied to parts of the cellphone system in an attempt to evaluate the privacy of traffic and location data in this system.
Tue, 02 Jun 2009 00:00:00 GMThttp://hdl.handle.net/1956/75392009-06-02T00:00:00ZA polynomial-time algorithm for LO based on generalized logarithmic barrier functions
http://hdl.handle.net/1956/7473
A polynomial-time algorithm for LO based on generalized logarithmic barrier functions
El Ghami, Mohamed; Ivanov, I.D.; Roos, C.; Steihaug, Trond
Peer reviewed; Journal article
Tue, 01 Jan 2008 00:00:00 GMThttp://hdl.handle.net/1956/74732008-01-01T00:00:00ZA type system for counting instances of software components
http://hdl.handle.net/1956/7467
A type system for counting instances of software components
Bezem, Marcus A.; Hovland, Dag; Truong, Anh Hoang
Peer reviewed; Journal article
We identify an abstract language for component software based on process algebra. Besides the usual operators for sequential, alternative and parallel composition, it has primitives for instantiating components and for deleting instances of components. We define an operational semantics for our language and give a type system in which types express quantitative information on the components involved in the execution of the expressions of the language. This information includes, for each component, the maximum number of instances that are simultaneously active during the execution of the expression. The type system is compositional through the novel use of 'deficit types'. The type inference algorithm runs in time quadratic in the size of the input. We consider extensions of the language with loops and tail recursion, and with a scope mechanism. We illustrate the approach with some examples, one on UML diagram refinement and one on counting objects on the free store in C++.
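The kind of compositional bound the abstract describes can be illustrated with a toy calculation. The following is a simplified sketch, not the paper's type system (in particular it ignores deficit types and treats alternative composition with a worst-case net effect): for a single component, it computes an upper bound on the maximum number of simultaneously active instances over a small expression language with sequential, alternative and parallel composition, instantiation and deletion.

```python
# Simplified sketch (not the paper's type system): bound the maximum
# number of simultaneously active instances of one component.
# Expressions are tuples: ("new",), ("del",), ("seq", e1, e2),
# ("alt", e1, e2), ("par", e1, e2).

def bound(expr):
    """Return (max_active, net_change) for `expr`."""
    op = expr[0]
    if op == "new":
        return (1, 1)      # one instance created, peak of 1
    if op == "del":
        return (0, -1)     # one instance destroyed
    m1, n1 = bound(expr[1])
    m2, n2 = bound(expr[2])
    if op == "seq":        # e2 starts after e1 finishes
        return (max(m1, n1 + m2), n1 + n2)
    if op == "alt":        # either branch may run (worst-case net)
        return (max(m1, m2), max(n1, n2))
    if op == "par":        # interleaving: peaks may coincide
        return (m1 + m2, n1 + n2)
    raise ValueError(op)
```

For example, `("seq", ("new",), ("seq", ("new",), ("del",)))` yields a peak of 2 simultaneous instances with a net change of 1.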
Sun, 01 Jan 2012 00:00:00 GMThttp://hdl.handle.net/1956/74672012-01-01T00:00:00ZFiltering duplicate reads from 454 pyrosequencing data
http://hdl.handle.net/1956/7458
Filtering duplicate reads from 454 pyrosequencing data
Balzer, Susanne Mignon; Malde, Ketil; Grohme, Markus A.; Jonassen, Inge
Peer reviewed; Journal article
<p><strong>Motivation:</strong> In recent years, 454 pyrosequencing has
emerged as an efficient alternative to traditional Sanger sequencing
and is widely used in both de novo whole-genome sequencing and
metagenomics. The latter application in particular is extremely sensitive
to sequencing errors and artificially duplicated reads. Both are
common in 454 pyrosequencing and can create a strong bias in the
estimation of diversity and composition of a sample. To date, there
are several tools that aim to remove both sequencing noise
and duplicates. Nevertheless, duplicate removal is often based on
nucleotide sequences rather than on the underlying flow values,
which contain additional information.</p>
<p><strong>Results:</strong> With the novel tool JATAC, we present an approach towards
a more accurate duplicate removal by analysing flow values directly.
Making use of previous findings on 454 flow data characteristics,
we combine read clustering with Bayesian distance measures.
Finally, we provide a benchmark with an existing algorithm.</p>
<p><strong>Availability:</strong> JATAC is freely available under the General Public
License from <a href="http://malde.org/ketil/jatac/" target="blank">http://malde.org/ketil/jatac/</a>.</p>
<p><strong>Supplementary information:</strong> Supplementary data are available at
Bioinformatics online</p>
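The sequence-level duplicate removal the abstract contrasts with can be sketched naively (a hypothetical illustration, not JATAC, which works on flow values): keep one read per group of reads sharing an identical sequence prefix of length k.

```python
# Illustrative sketch (not JATAC): naive duplicate filtering that keeps
# the first read in each group of reads sharing an identical k-base
# prefix -- the sequence-level heuristic a flow-value approach refines.

def filter_prefix_duplicates(reads, k=20):
    seen = set()
    kept = []
    for read in reads:
        key = read[:k]
        if key not in seen:     # first read with this prefix wins
            seen.add(key)
            kept.append(read)
    return kept
```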
Tue, 01 Jan 2013 00:00:00 GMThttp://hdl.handle.net/1956/74582013-01-01T00:00:00ZSystematic exploration of error sources in pyrosequencing flowgram data
http://hdl.handle.net/1956/7457
Systematic exploration of error sources in pyrosequencing flowgram data
Balzer, Susanne Mignon; Malde, Ketil; Jonassen, Inge
Peer reviewed; Journal article
<p><strong>Motivation:</strong> 454 pyrosequencing, by Roche Diagnostics, has
emerged as an alternative to Sanger sequencing when it comes to
read lengths, performance and cost, but shows higher per-base error
rates. Although there are several tools available for noise removal,
targeting different application fields, data interpretation would benefit
from a better understanding of the different error types.</p>
<p><strong>Results:</strong> By exploring 454 raw data, we quantify to what extent
different factors account for sequencing errors. In addition to the
well-known homopolymer length inaccuracies, we have identified
errors likely to originate from other stages of the sequencing process.
We use our findings to extend the flowsim pipeline with functionalities
to simulate these errors, and thus enable a more realistic simulation
of 454 pyrosequencing data with flowsim.</p>
<p><strong>Availability:</strong> The flowsim pipeline is freely available under the
General Public License from <a href="http://biohaskell.org/Applications/FlowSim" target="blank">http://biohaskell.org/Applications/FlowSim</a></p>
ISMB/ECCB 2011 Proceedings Papers Committee, July 17 to July 19, 2011, Vienna, Austria.
Sat, 01 Jan 2011 00:00:00 GMThttp://hdl.handle.net/1956/74572011-01-01T00:00:00ZCharacteristics of 454 pyrosequencing data—enabling realistic simulation with flowsim
http://hdl.handle.net/1956/7456
Characteristics of 454 pyrosequencing data—enabling realistic simulation with flowsim
Balzer, Susanne Mignon; Malde, Ketil; Lanzén, Anders; Sharma, Animesh; Jonassen, Inge
Peer reviewed; Journal article
<p><strong>Motivation:</strong> The commercial launch of 454 pyrosequencing in 2005 was a milestone in genome sequencing in terms of performance and
cost. Throughout the three available releases, average read lengths
have increased to ∼500 base pairs and are thus approaching read
lengths obtained from traditional Sanger sequencing. Study design
of sequencing projects would benefit from being able to simulate
experiments.</p>
<p><strong>Results:</strong> We explore 454 raw data to investigate its characteristics
and derive empirical distributions for the flow values generated by
pyrosequencing. Based on our findings, we implement Flowsim,
a simulator that generates realistic pyrosequencing data files of
arbitrary size from a given set of input DNA sequences. We finally
use our simulator to examine the impact of sequence lengths on the
results of concrete whole-genome assemblies, and we suggest its
use in planning of sequencing projects, benchmarking of assembly
methods and other fields.</p>
<p><strong>Availability:</strong> Flowsim is freely available under the General Public
License from <a href="http://blog.malde.org/index.php/flowsim/" target="blank">http://blog.malde.org/index.php/flowsim/</a></p>
ECCB 2010 Conference Proceedings, September 26 to September 29, 2010, Ghent, Belgium.
Fri, 01 Jan 2010 00:00:00 GMThttp://hdl.handle.net/1956/74562010-01-01T00:00:00ZCharacteristics of Pyrosequencing Data – Analysis, Methods, and Tools
http://hdl.handle.net/1956/7455
Characteristics of Pyrosequencing Data – Analysis, Methods, and Tools
Balzer, Susanne Mignon
Doctoral thesis
The introduction of this thesis provides background knowledge on the 454 sequencing
technology and a detailed review of the most relevant sequencing artifacts.
Chapter 1 puts the 454 sequencing technology into a historical context. Chapter
2 gives an overview of where 454 sequencing is applied, focusing on the most common
application areas. Chapter 3 provides a detailed description of how 454 sequencing
works, from library preparation to sequencing, imaging and data output.
Here, the distinction between the different detail levels of sequencing information is
crucial since data aggregation involves information loss. Chapter 4 describes where
errors and artifacts can arise, how they are manifested in the sequencing data, and
what impact they can have on downstream analyses. Finally, Chapter 5 puts the
contributions into their respective analytical contexts and discusses their relevance
for the research community.
The first paper, published in Bioinformatics in September 2010 and presented at
the European Conference on Computational Biology (ECCB) in Belgium the same
year, comprises the exploration, modeling and simulation of 454 data. Under
the title “Characteristics of 454 pyrosequencing data – enabling realistic simulation
with Flowsim”, we present a detailed analysis of sequencing data and a simulation
tool that facilitates the design of sequencing projects. The tool can be used to
examine and quantify the impact of read length, coverage, sequencing errors and
signal degradation on genome assembly. Furthermore, it enables the testing and
benchmarking of known and novel algorithms, methods and tools in a number of
application areas such as whole genome assembly, read alignment, read correction,
single-nucleotide polymorphism (SNP) identification and metagenomics.
The second paper, “Systematic exploration of error sources in pyrosequencing
flowgram data”, was published in Bioinformatics in July 2011 and presented at
the Intelligent Systems for Molecular Biology (ISMB)/ECCB conference in Austria
the same year. We added several features and modules to the existing simulation
pipeline. These were based on the observation of several error sources, such as copying errors introduced through polymerase chain reaction (PCR), a method used in
454 sequencing for amplification of the templates. These errors appear as mutations
and are virtually impossible to distinguish from true sequence variants.
Similar to the second paper, the third paper, “Filtering duplicate reads from
454 pyrosequencing data”, focuses on a single error type, namely artificially duplicated
reads. Our JATAC tool enables removal of this artifact on the most detailed
sequencing data level, outperforming existing tools. The paper was published in
Bioinformatics in April 2013.
Mon, 17 Jun 2013 00:00:00 GMThttp://hdl.handle.net/1956/74552013-06-17T00:00:00ZQALM - a tool for automating quantitative analysis of LC-MS-MS/MS data
http://hdl.handle.net/1956/7369
QALM - a tool for automating quantitative analysis of LC-MS-MS/MS data
Lerøy, Kjartan
Master thesis
The goal of bioinformatics is to support science and research in the field of biology through the application of information technology. Proteomics is a field within biology that deals with the study of proteins. This thesis describes QALM, an application developed to automate and simplify a specific type of proteomics analysis. QALM is first and foremost a proof of concept through which certain options for implementing such automation have been explored. Although a functional and usable application has been created, it should primarily be considered a stepping stone for similar applications in the future. Currently QALM is a desktop tool for importing and exporting data, integrating and communicating with external systems for the analysis of such data, and finally generating reports to present the results. It currently runs only under the Linux operating system, but it should be possible to change this fairly easily.
Mon, 31 May 2010 00:00:00 GMThttp://hdl.handle.net/1956/73692010-05-31T00:00:00ZPrediction of Polycomb/Trithorax Response Elements using Support Vector Machines
http://hdl.handle.net/1956/7222
Prediction of Polycomb/Trithorax Response Elements using Support Vector Machines
Bredesen, Bjørn Andre
Master thesis
Polycomb/Trithorax Response Elements (PREs) are epigenetic elements that can maintain established transcriptional states over multiple cell divisions. Sequence motifs in known PREs have enabled genome-wide PRE prediction by the PREdictor and jPREdictor, using combined motif occurrences for scoring sequence windows. The EpiPredictor predicts PREs by using the method of Support Vector Machines (SVM), which enables the construction of non-linear classifiers by use of kernel functions. Aspects of using SVMs for PRE prediction can be investigated, such as setting of SVM parameters, using SVM decision values for scoring and using alternative feature sets.
The PRE prediction implementation presented in this thesis, called PRESVM, uses SVM decision values to score sequence windows. PRESVM implements the feature sets used by (j)PREdictor and EpiPredictor, as well as feature sets using relative motif occurrence distances and periodic motif occurrence. Grid search and Particle Swarm Optimization are supported for setting SVM parameters.
For evaluating PRE predictions of multiple classifiers against experimental data sets, an application called PREsent has been implemented.
For a similar configuration of PRESVM and jPREdictor, PRESVM predicted a larger number of candidate PREs; these were more sensitive against the experimental data considered, but had lower positive predictive values, than those of jPREdictor. A formal relationship was established between the PRESVM and jPREdictor decision functions for this configuration. These trade-offs make it difficult to conclude that either classifier is superior. Many configurations remain to be tested, and the results encourage further testing.
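The combined-motif window scoring that these classifiers build their features on can be sketched roughly as follows. This is a simplified illustration under assumed parameters (motif list, window and step sizes are placeholders), not the actual (j)PREdictor or PRESVM code: each sliding window is scored by summed pairwise co-occurrence counts of motifs.

```python
# Illustrative sketch (not (j)PREdictor/PRESVM): score sliding sequence
# windows by pairwise co-occurrence counts of sequence motifs.

def count_motif(window, motif):
    """Count (possibly overlapping) occurrences of `motif` in `window`."""
    return sum(window.startswith(motif, i) for i in range(len(window)))

def window_scores(seq, motifs, win=500, step=100):
    """Return (start, score) per window; score sums pairwise products
    of motif occurrence counts within the window."""
    scores = []
    for start in range(0, max(1, len(seq) - win + 1), step):
        w = seq[start:start + win]
        counts = [count_motif(w, m) for m in motifs]
        score = sum(counts[i] * counts[j]
                    for i in range(len(counts))
                    for j in range(i + 1, len(counts)))
        scores.append((start, score))
    return scores
```

Windows whose score exceeds a trained threshold (or, in the SVM variant, whose decision value is positive) would be reported as candidate PREs.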
Mon, 03 Jun 2013 00:00:00 GMThttp://hdl.handle.net/1956/72222013-06-03T00:00:00ZRegulatory mechanisms of non-coding RNAs during zebrafish embryogenesis
http://hdl.handle.net/1956/7202
Regulatory mechanisms of non-coding RNAs during zebrafish embryogenesis
Nepal, Chirag
Doctoral thesis
<p>For many years, RNAs were thought to be intermediate products between DNA and
protein. The discovery of RNA interference (RNAi), a regulatory process that uses
small non-coding RNAs to regulate gene expression at the post-transcriptional level,
changed our view of RNAs. However, it was the discovery of microRNAs that
established RNAs as regulatory elements. In recent years, many high-throughput
sequencing studies have identified hundreds to thousands of various
kinds of non-coding RNAs. The existence and biological relevance of the non-coding
RNAs detected in large-scale analyses of human tissues have not yet been
characterized in a vertebrate animal in vivo. To gain insight into their existence and
biological relevance in a vertebrate animal in vivo, we have
set out to generate the first global description of TSS usage during key stages of
vertebrate embryonic development at single nucleotide resolution. We have coupled
CAGE maps to protein-coding and non-coding transcripts by RNA sequencing
(providing a quantitative description of TSS usage on a genome scale) and anchored
to posttranslational histone modifications (H3K4me3) by ChIP sequencing.</p>
<p>We reveal extraordinary dynamics of promoter usage during
development of the vertebrate embryo. We show that the onset of transcription
and subsequent differentiation of the embryo is characterized by the developmentally
regulated appearance of 5’-ends of intragenic RNAs on many genes, and of an entire
hitherto unknown layer of RNA species overlapping known genes and having specific
signatures occurring in exons, introns and 3’-UTRs of developmentally active genes.
We characterize the pervasive production of intragenic processed RNAs including
exonic and intron-5’ end specific RNAs and provide the first indication for the
biological processes in which they may function. Notably, intron 5’ end associated
non-coding RNAs are active zygotically and restricted to genes that encode RNA
processing and splicing proteins in both fish and human. We provide
evidence that exonic RNAs are produced by a non-canonical posttranscriptional
mechanism independent of the gene 5’ end. We show the initiation landscape and
developmental dynamics of lincRNAs; we show the evolutionarily conserved process
of developmentally regulated posttranscriptional processing of lincRNAs into
intragenic RNAs, which demonstrates the utility of zebrafish for studying mammalian
lincRNA processing.</p>
<p>The main aim of this work was to provide a (currently non-existent) annotation of miRNA promoters and to characterize their common features at
the transcriptional, post-transcriptional and chromatin levels. We describe the first genome-wide
identification of miRNA promoters in zebrafish active during the early embryonic
developmental stages. We identified a small number of maternally transcribed
miRNAs, one MBT-specific miRNA, and a majority that are zygotically transcribed.
We report the first evidence in zebrafish and pufferfish of moRNAs, which were
previously reported in human and Ciona intestinalis. We show evidence for
unexpected enrichment of pre-miRNA sites with promoter-associated histone
modification marks (H3K4me3 and H2A.Z) suggesting chromatin regulation and
potential involvement of transcription machinery in pre-miRNA processing,
suggesting co-transcriptional splicing of pre-miRNAs and pri-miRNA.</p>
<p>We have provided a catalogue of intermediate-sized non-coding RNAs in zebrafish
by constructing an RNA library enriched for intermediate-sized (50-500 nt) non-coding RNAs,
collected from zebrafish larvae (5-7 days post fertilization). In particular, we validated
most annotated snoRNAs and identified a few hundred novel snoRNAs, producing the
most comprehensive annotation of zebrafish snoRNAs to date. Host genes for most
snoRNAs showed no evidence of independent snoRNA transcription, suggesting that the
snoRNAs are co-transcribed with their host genes. Interestingly, host (coding and non-coding)
genes require non-canonical transcription initiation machinery, as indicated by TCT
initiation signals, which is associated with the translation machinery. The 5' ends of many
snoRNAs overlap with CAGE 5' ends, suggesting that they are either capped or
undergo post-transcriptional modification; this is also evolutionarily conserved in
human snoRNAs. Small RNAs derived from snoRNAs (sd-snoRNAs) are generated from most
snoRNAs, and we provide the first evidence of sd-snoRNAs produced in oocytes, suggesting
their potential importance during early embryogenesis.</p>
Fri, 03 May 2013 00:00:00 GMThttp://hdl.handle.net/1956/72022013-05-03T00:00:00ZUtvidelse og formell sikkerhetsanalyse av Dynamic Presentation Generator
http://hdl.handle.net/1956/7191
Utvidelse og formell sikkerhetsanalyse av Dynamic Presentation Generator
Vines, Aleksander
Master thesis
This thesis addresses security issues in the Dynamic Presentation Generator and investigates the possibility of using single sign-on via Mi side.
Thu, 02 May 2013 00:00:00 GMThttp://hdl.handle.net/1956/71912013-05-02T00:00:00ZSketch-based Storytelling for Cognitive Problem Solving: Externalization, Evaluation, and Communication in Geology
http://hdl.handle.net/1956/7175
Sketch-based Storytelling for Cognitive Problem Solving: Externalization, Evaluation, and Communication in Geology
Lidal, Endre Mølster
Doctoral thesis
<p>Problem solving is an important part of all engineering and scientific activities. It
is present, for instance, when experts want to develop more fuel-efficient cars or
when they are searching for oil and gas in the subsurface. Many alternatives have to be
examined and evaluated before the optimal solution is found. Solving such problems is
not only performed inside the mind of the scientist, but it is also an interaction between
mind and scribbles, sketches, or visualizations on papers, on blackboards, and on computers.
For problem solving in expert teams, this externalization through sketches and
visualizations also plays an important communicative role.</p>
<p>This dissertation presents research for assisting the problem-solving process on the
computer, through novel technological advances in the fields of illustrative visualization
and sketch-based modeling. Specifically, it targets problems that are related to
evolutionary processes. Firstly, inspired by storytelling, the domain experts can express
their ideas for solution as stories. These stories are based on sketches that the
experts draw, utilizing a novel temporal-sketching interface inspired by a flip-over canvas
metaphor. Further, the dissertation describes a set of sketching proxy geometries,
such as the box-proxy geometry, that the experts can take advantage of when drawing
three-dimensional (3D) sketches. These proxy geometries support the task of mapping
a two-dimensional (2D) input, e.g., a mouse or a digitizer tablet, to a 3D sketch. Solving
difficult problems requires that many different solutions are evaluated to identify the
optimal one. This dissertation introduces the story-tree, a tree-graph data structure
and visualization, which manages and provides access to an ensemble of alternative
stories. The story-tree also provides an interface where the stories can be evaluated and
compared. This playback of the stories is done through automatic animations of the
2D sketches. The third challenge addressed in this dissertation is to communicate the
optimal solution to decision-makers and laymen. By combining the animated 2D story
sketches with illustrative visualization techniques it is possible to automatically synthesize
and animate 3D models. These animations can be combined with new cutaway
visualization techniques to reveal features hidden inside such 3D models.</p>
<p>All of these contributions have been investigated in the context of the problem-solving
tasks relevant to the early phase of petroleum exploration. This phase is characterized
by having very little ground-truth data available. Thus, a large solution
space needs to be explored. Even so, the geologists need to produce models that can
predict if petroleum is present. In addition to working with few data, the geologists also
work under heavy time constraints because of the competition between the oil companies
exploring the same area. The contributions from this dissertation have created
enthusiasm among the domain experts, and a new research initiative has already materialized from the work described in this dissertation. Based on the feedback from the
domain experts, we can conclude that the contributions presented in this dissertation
form a valuable step towards better tools for problem solving, involving the computer,
for the domain investigated here.</p>
Tue, 25 Jun 2013 00:00:00 GMThttp://hdl.handle.net/1956/71752013-06-25T00:00:00ZXHM: A system for detection of potential cross hybridizations in DNA microarrays
http://hdl.handle.net/1956/7174
XHM: A system for detection of potential cross hybridizations in DNA microarrays
Flikka, Kristian; Yadetie, Fekadu; Lægreid, Astrid; Jonassen, Inge
Journal article; Peer reviewed
<p>Background: Microarrays have emerged as the preferred platform for high throughput gene
expression analysis. Cross-hybridization among genes with high sequence similarities can be a
source of error reducing the reliability of DNA microarray results.</p> <p>Results: We have developed a tool called XHM (cross hybridization on microarrays) for
assessment of the reliability of hybridization signals by detecting potential cross-hybridizations on
DNA microarrays. This is done by comparing the sequences of the probes against an extensive
database representing the transcriptome of the organism in question. XHM is available online at
http://www.bioinfo.no/tools/xhm/.</p><p>Conclusions: Using XHM with its user-adjustable parameters will enable scientists to check their
lists of differentially expressed genes from microarray experiments for potential cross-hybridizations.
This provides information that may be useful in the validation of the microarray
results.</p>
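The kind of check XHM performs can be caricatured in a few lines. This is an illustrative sketch only, not XHM itself (which runs a proper similarity search against an extensive transcriptome database): flag a probe if it shares a long exact substring with any transcript other than its intended target.

```python
# Illustrative sketch (not XHM): flag a probe as a potential
# cross-hybridizer if any non-target transcript contains an exact
# match of at least `min_match` consecutive bases of the probe.

def has_cross_hyb(probe, target_id, transcripts, min_match=15):
    """`transcripts` maps transcript id -> sequence string."""
    for tid, seq in transcripts.items():
        if tid == target_id:
            continue                     # skip the probe's own target
        for i in range(len(probe) - min_match + 1):
            if probe[i:i + min_match] in seq:
                return True
    return False
```

A real tool would use alignment scores and tunable identity thresholds rather than exact substring matches, but the filtering logic is the same: probes passing the check are trusted, the rest are reported for manual validation.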
Fri, 27 Aug 2004 00:00:00 GMThttp://hdl.handle.net/1956/71742004-08-27T00:00:00ZNovice Difficulties with Language Constructs
http://hdl.handle.net/1956/7167
Novice Difficulties with Language Constructs
Rosbach, Alexander Hoem
Master thesis
Programming is a difficult skill to learn, and programming courses have high dropout rates. In this thesis we study the problems that students have during their first introductory programming course at the University of Bergen. We inspect the solutions that they submit for the given assignments, and look at the frequency of the different kinds of mistakes in their work. We present a problem taxonomy that we use to classify the mistakes found to be the most common, and conclude that a significant part of the problems are observable misconceptions. We introduce a web-based tool, Javis, that we have developed to aid the students with these kinds of problems.
Based on the experience and knowledge gained during this work we present a proposal of a grading-by-annotation scheme. This scheme is specifically designed to increase the quality of the feedback given to students on their submitted work and provide valuable feedback to the teachers regarding the problems that their students have.
Thu, 01 Aug 2013 00:00:00 GMThttp://hdl.handle.net/1956/71672013-08-01T00:00:00ZAn implementation of a Feedback Vertex Set algorithm.
http://hdl.handle.net/1956/7082
An implementation of a Feedback Vertex Set algorithm.
Sivertsen, Arvid Soldal
Master thesis
An implementation of a Feedback Vertex Set algorithm for undirected, unweighted graphs, together with improvements to the implementation and empirical results.
Mon, 03 Jun 2013 00:00:00 GMThttp://hdl.handle.net/1956/70822013-06-03T00:00:00ZNetwork coding in Bluetooth networks
http://hdl.handle.net/1956/7061
Network coding in Bluetooth networks
Stenvoll, Roger
Master thesis
This thesis discusses the possibility of applying network coding to a Bluetooth piconet. A protocol is proposed, based on deterministic linear network coding. The proposed alphabet size is binary, and the encoding equation is a trivial parity-check code. With the proposed code, the encoding scales easily with the number of source nodes in the network and does not require exchange of coding equations. Encoding and decoding are performed using bitwise XOR of the packets, and require neither a pre-computed look-up table nor a great amount of dedicated memory to store intermediate packets. Finally, encoding and decoding are not computationally hard. Network coding applied as in the proposed protocol is only beneficial to the master node and the communication from the master node to the slave nodes. Furthermore, it does not provide any error protection. Information exchanged in the network becomes more easily available to all the slave nodes, and would require a system to maintain confidentiality where this is required. However, this is trivially achieved in a Bluetooth network without network coding as well, and the same countermeasure should be enforced in all Bluetooth networks, not only when applying network coding. A theoretical study of the proposed algorithm shows a gain in throughput and reduced power consumption. These features are appreciated by computationally and power-constrained nodes. The efficiency is maximized when there are few source nodes in the network and large frame sizes (DH5) are used. The theoretical study is verified by a simulator designed for this purpose.
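The parity-check encoding described above can be sketched in a few lines. This is an illustrative sketch for two equal-length source packets, not the thesis's full protocol: a relay broadcasts the bitwise XOR of two packets in one transmission, and each source node recovers the other packet by XORing out its own contribution.

```python
# Illustrative sketch (not the thesis protocol): binary network coding
# with a trivial parity check. Packets are assumed equal length.

def xor_encode(pkt_a: bytes, pkt_b: bytes) -> bytes:
    """Relay: encode two packets with bitwise XOR (one broadcast
    replaces two separate transmissions)."""
    return bytes(a ^ b for a, b in zip(pkt_a, pkt_b))

def xor_decode(coded: bytes, own: bytes) -> bytes:
    """Node: recover the other packet by XORing out its own packet."""
    return bytes(c ^ o for c, o in zip(coded, own))
```

Because XOR is its own inverse, no look-up tables or stored coding equations are needed, which is the source of the throughput and power savings the thesis analyses.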
Thu, 01 Oct 2009 00:00:00 GMThttp://hdl.handle.net/1956/70612009-10-01T00:00:00ZData Profiling to Reveal Meaningful Structures for Standardization
http://hdl.handle.net/1956/7058
Data Profiling to Reveal Meaningful Structures for Standardization
Nyero, Walter
Master thesis
Today many organisations and enterprises use data from several sources, either for strategic decision making or for other business goals such as data integration. Data quality problems are always a hindrance to effective and efficient utilization of such data. Tools have been built to clean and standardize data; however, there is a need to pre-process this data by applying techniques and processes from statistical semantics, NLP, and lexical analysis. Data profiling employs these techniques to discover and reveal commonalities and differences in the inherent data structures, to present ideas for the creation of a unified data model, and to provide metrics for data standardization and verification. The IBM WebSphere tool was used to pre-process datasets/records through the design and implementation of rule sets, developed in QualityStage, and tasks, created in DataStage. The data profiling process generated a set of statistics (frequencies), token/phrase relationships (RFDs, GRFDs), and other findings in the dataset that provided an overall view of the data source's inherent properties and structures. Examining the data (identifying violations of the normal forms and other data commonalities) and collecting the desired information provided useful statistics for data standardization and verification by enabling disambiguation and classification of the data.
Fri, 20 Nov 2009 00:00:00 GMThttp://hdl.handle.net/1956/70582009-11-20T00:00:00ZObscurance-based Volume Rendering Framework
http://hdl.handle.net/1956/6942
Obscurance-based Volume Rendering Framework
Ruiz, Marc; Boada, Imma; Viola, Ivan; Bruckner, Stefan; Feixas, Miquel; Sbert, Mateu
Peer reviewed; Conference object
Obscurances simulate lighting effects in a faster way than global illumination. Their application in volume visualization is of special interest
since they permit us to generate a high-quality rendering at a low cost. In this paper, we propose an obscurance-based
framework that allows us to obtain realistic and illustrative volume visualizations in an interactive manner.
Obscurances can include color bleeding effects without additional cost. Moreover, we obtain a saliency map from
the gradient of obscurances and we show its application to enhance volume visualization and to select the most
salient views.
IEEE/EG Symposium on Volume and Point-Based Graphics (2008), H.-C. Hege, D. Laidlaw, R. Pajarola, O. Staadt (Editors).
Tue, 01 Jan 2008 00:00:00 GMThttp://hdl.handle.net/1956/69422008-01-01T00:00:00ZA New Generating Set Search Algorithm for Partially Separable Functions
http://hdl.handle.net/1956/6941
A New Generating Set Search Algorithm for Partially Separable Functions
Frimannslund, Lennart; Steihaug, Trond
Peer reviewed; Conference object
A new derivative-free optimization method for unconstrained optimization of partially separable functions is presented. Using average curvature information computed from sampled function values the method generates an average Hessian-like matrix and uses its eigenvectors as new search directions. For partially separable functions, many of the entries of this matrix will be identically zero. The method is able to exploit this property and as a consequence update its search directions more often than if sparsity is not taken into account. Numerical results show that this is a more effective method for functions with a topography which requires frequent updating of search directions for rapid convergence. The method is an important extension of a method for nonseparable functions previously published by the authors. This new method allows for problems of larger dimension to be solved, and will in most cases be more efficient.
Fri, 01 Jan 2010 00:00:00 GMThttp://hdl.handle.net/1956/69412010-01-01T00:00:00ZData structure, Access and Presentation in Web-GIS for marine research
http://hdl.handle.net/1956/6784
Data structure, Access and Presentation in Web-GIS for marine research
Grønning, Torgeir Mossige
Master thesis
A prototype Web-GIS system has been constructed as a replacement for the ageing ODB system. It consists of a software stack with PostGIS as a data store, GeoServer as a data accessor, and a client implemented in JavaScript with HTML5/CSS3. The client utilises the OpenLayers JavaScript library, as well as other JavaScript utility libraries. The application is compliant with current standards for storing, presenting and communicating geographic data, as well as current standards in web development. The most central geospatial standards employed are the OGC standards SFA, WMS and WFS.
The utilised software, standards, work process and experiences acquired in the construction of this system are described and documented in the thesis. As such, the thesis may provide findings and advice useful for carrying out similar or related projects.
Sun, 02 Jun 2013 00:00:00 GMThttp://hdl.handle.net/1956/67842013-06-02T00:00:00ZExploring the evolution of protein function in Archaea
http://hdl.handle.net/1956/6646
Exploring the evolution of protein function in Archaea
Goncearenco, Alexander; Berezovsky, Igor N.
Journal article
<p>Background: Despite recent progress in studies of the evolution of protein function, the questions what were the first functional protein domains and what were their basic building blocks remain unresolved. Previously, we introduced the concept of elementary functional loops (EFLs), which are the functional units of enzymes that provide elementary reactions in biochemical transformations. They are presumably descendants of primordial catalytic peptides.</p> <p>Results: We analyzed distant evolutionary connections between protein functions in Archaea based on the EFLs comprising them. We show examples of the involvement of EFLs in new functional domains, as well as reutilization of EFLs and functional domains in building multidomain structures and protein complexes.</p> <p>Conclusions: Our analysis of the archaeal superkingdom yields the dominating mechanisms in different periods of protein evolution, which resulted in several levels of the organization of biochemical function. First, functional domains emerged as combinations of prebiotic peptides with the very basic functions, such as nucleotide/phosphate and metal cofactor binding. Second, domain recombination brought to the evolutionary scene the multidomain proteins and complexes. Later, reutilization and de novo design of functional domains and elementary functional loops complemented evolution of protein function.</p>
Wed, 30 May 2012 00:00:00 GMThttp://hdl.handle.net/1956/66462012-05-30T00:00:00ZSimilarity-based Exploded Views
http://hdl.handle.net/1956/6594
Similarity-based Exploded Views
Ruiz, Marc; Viola, Ivan; Boada, Imma; Bruckner, Stefan; Feixas, Miquel; Sbert, Mateu
Chapter; Peer reviewed
Exploded views are often used in illustration to overcome the
problem of occlusion when depicting complex structures. In this paper,
we propose a volume visualization technique inspired by exploded views
that partitions the volume into a number of parallel slabs and shows
them apart from each other. The thickness of slabs is driven by the similarity
between partitions. We use an information-theoretic technique for
the generation of exploded views. First, the algorithm identifies the viewpoint
from which the structure is the highest. Then, the partition of the
volume into the most informative slabs for exploding is obtained using
two complementary similarity-based strategies. The number of slabs and
the similarity parameter are freely adjustable by the user.
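The slab-partitioning idea can be illustrated with a simplified similarity score. The paper's actual measure is information-theoretic; the histogram-intersection function below is only a hypothetical stand-in for scoring how alike two adjacent slabs are (a low score suggests a good place to cut the volume apart).

```python
def slab_similarity(slab_a, slab_b, bins=16, vmax=255):
    """Histogram intersection between the intensity distributions of two
    volume slabs (each a flat list of voxel intensities in [0, vmax]).
    A value near 1 means the slabs carry similar content; near 0 suggests
    placing an explosion cut between them."""
    def hist(slab):
        h = [0] * bins
        for v in slab:
            h[min(v * bins // (vmax + 1), bins - 1)] += 1
        n = len(slab)
        return [c / n for c in h]
    ha, hb = hist(slab_a), hist(slab_b)
    return sum(min(a, b) for a, b in zip(ha, hb))
```

Identical slabs score 1.0 and slabs with disjoint intensity ranges score 0.0, so thresholding this score over candidate cut positions yields a partition into slabs.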
Tue, 01 Jan 2008 00:00:00 GMThttp://hdl.handle.net/1956/65942008-01-01T00:00:00ZFloating Fault Analysis of Trivium
http://hdl.handle.net/1956/6591
Floating Fault Analysis of Trivium
Hojsík, Michal; Rudolf, Bohuslav
Chapter; Peer reviewed
One of the eSTREAM final portfolio ciphers is the hardware-oriented
stream cipher Trivium. It is based on three nonlinear feedback shift
registers with a linear output function. Although Trivium has attracted
a lot of interest, it remains unbroken by passive attacks.
At FSE 2008 a differential fault analysis of Trivium was presented. It is
based on the fact that one-bit fault induction reveals many polynomial
equations among which a few are linear and a few quadratic in the inner
state bits. The attack needs roughly 43 induced one-bit random faults
and uses only linear and quadratic equations.
In this paper we present an improvement of this attack. It requires only
3.2 one-bit fault injections on average to recover the Trivium inner state
(and consequently its key), while in the best case it succeeds after 2
fault injections. We term this attack floating fault analysis since it
exploits the floating model of the cipher. The use of this model leads to
the transformation of many obtained high-degree equations into linear
equations.
The presented work shows how a change of the cipher representation
may result in a much better attack.
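For reference, Trivium's keystream generation, as specified in its eSTREAM submission, can be sketched as follows (key/IV loading and the initialization rounds are omitted); the fault attacks above target the 288-bit inner state that this loop evolves:

```python
def trivium_keystream(state, n):
    """Generate n keystream bits from a 288-bit Trivium inner state.

    state: list of 288 bits; state[0] corresponds to s_1 in the spec.
    The state is split into three registers of 93, 84 and 111 bits.
    """
    s = list(state)
    out = []
    for _ in range(n):
        t1 = s[65] ^ s[92]                # s66 + s93
        t2 = s[161] ^ s[176]              # s162 + s177
        t3 = s[242] ^ s[287]              # s243 + s288
        out.append(t1 ^ t2 ^ t3)          # linear output function
        t1 ^= (s[90] & s[91]) ^ s[170]    # s91*s92 + s171
        t2 ^= (s[174] & s[175]) ^ s[263]  # s175*s176 + s264
        t3 ^= (s[285] & s[286]) ^ s[68]   # s286*s287 + s69
        # shift each register, feeding it the value computed from another
        s = [t3] + s[:92] + [t1] + s[93:176] + [t2] + s[177:287]
    return out
```

Each keystream bit is a XOR of six state bits, while the feedback taps are the only nonlinear (AND) terms, which is why a known one-bit fault difference propagates into a mix of linear and quadratic equations in the inner state bits.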
Tue, 01 Jan 2008 00:00:00 GMThttp://hdl.handle.net/1956/65912008-01-01T00:00:00ZDifferential Fault Analysis of Trivium
http://hdl.handle.net/1956/6590
Differential Fault Analysis of Trivium
Hojsík, Michal; Rudolf, Bohuslav
Chapter; Peer reviewed
Trivium is a hardware-oriented stream cipher designed in 2005
by De Cannière and Preneel for the European project eSTREAM, and it has
successfully passed the first and the second phase of this project. Its design
has a simple and elegant structure. Although Trivium has attracted
a lot of interest, it remains unbroken.
In this paper we present a differential fault analysis of Trivium and propose
two attacks on Trivium using fault injection. We suppose that an attacker
can corrupt exactly one random bit of the inner state and that he can
do this many times for the same inner state. This can be achieved e.g.
in the CCA scenario. During experimental simulations, having inserted
43 faults at random positions, we were able to disclose the Trivium inner
state and afterwards the private key.
As far as we know, this is the first time differential fault analysis has been
applied to a stream cipher based on shift registers with non-linear feedback.
Tue, 01 Jan 2008 00:00:00 GMThttp://hdl.handle.net/1956/65902008-01-01T00:00:00ZTesting with Concepts and Axioms in C++
http://hdl.handle.net/1956/6555
Testing with Concepts and Axioms in C++
Bagge, Anya Helene; David, Valentin; Haveraaen, Magne
Research report
Modern development practices encourage extensive testing of code while it is still under development,
using unit tests to check individual code units in isolation. Such tests are typically case-based,
checking a likely error scenario or an error that has previously been identified and fixed. Coming up
with good test cases is challenging, and focusing on individual tests can distract from creating tests that
cover the full functionality.
Axioms, known from program specification, allow for an alternative way of generating test cases,
where the intended functionality is described as rules or equations that can be checked automatically.
Axioms are proposed as part of the concept feature of the upcoming C++0x standard.
In this paper, we describe how tests may be generated automatically from axioms in C++ concepts,
and supplied with appropriate test data to form effective automated unit tests.
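The axiom-to-test idea can be illustrated outside C++. The sketch below is a hypothetical harness, not the generator described in the report (which emits C++ unit tests from axioms in C++0x concepts): it draws random inputs and checks an equational axiom, reporting the first counterexample.

```python
import random

def axiom_test(axiom, generator, trials=1000, seed=0):
    """Check an equational axiom on randomly generated inputs.

    axiom: predicate returning True when the equation holds for its arguments.
    generator: callable producing one random argument tuple from an RNG.
    Returns a counterexample tuple, or None if no violation was observed.
    """
    rng = random.Random(seed)
    for _ in range(trials):
        args = generator(rng)
        if not axiom(*args):
            return args
    return None

# Axiom: concatenation of sequences is associative.
assoc = lambda x, y, z: (x + y) + z == x + (y + z)
# Hypothetical test-data generator: three short random integer lists.
gen3 = lambda rng: tuple([rng.randint(0, 9) for _ in range(rng.randint(0, 4))]
                         for _ in range(3))
```

Here `axiom_test(assoc, gen3)` finds no counterexample, while a false "axiom" such as `x + y == y + x` for lists is quickly refuted — the same division of labour (axioms describe intended behaviour, generated data exercises them) that the report realizes for C++ concepts.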
Wed, 01 Oct 2008 00:00:00 GMThttp://hdl.handle.net/1956/65552008-10-01T00:00:00ZQuantum social networks
http://hdl.handle.net/1956/6553
Quantum social networks
Cabello, Adán; Danielsen, Lars Eirik; López-Tarrida, Antonio J.; Portillo, José R.
Peer reviewed; Journal article
We introduce a physical approach to social networks (SNs) in which each actor is characterized by a yes–no test on a physical system. This allows us to consider SNs beyond those originated by interactions based on pre-existing properties, as in a classical SN (CSN). As an example of SNs beyond CSNs, we introduce quantum SNs (QSNs) in which actor i is characterized by a test of whether or not the system is in a quantum state |ψi〉. We show that QSNs outperform CSNs for a certain task and some graphs. We identify the simplest of these graphs and show that graphs in which QSNs outperform CSNs are increasingly frequent as the number of vertices increases. We also discuss more general SNs and identify the simplest graphs in which QSNs cannot be outperformed.
Wed, 27 Jun 2012 00:00:00 GMThttp://hdl.handle.net/1956/65532012-06-27T00:00:00ZOn the Classification of Hermitian Self-Dual Additive Codes over GF(9)
http://hdl.handle.net/1956/6541
On the Classification of Hermitian Self-Dual Additive Codes over GF(9)
Danielsen, Lars Eirik
Peer reviewed; Journal article
Additive codes over GF(9) that are self-dual with
respect to the Hermitian trace inner product have a natural application
in quantum information theory, where they correspond
to ternary quantum error-correcting codes. However, these codes
have so far received far less interest from coding theorists than
self-dual additive codes over GF(4), which correspond to binary
quantum codes. Self-dual additive codes over GF(9) have been
classified up to length 8, and in this paper we extend the complete
classification to codes of length 9 and 10. The classification is
obtained by using a new algorithm that combines two graph
representations of self-dual additive codes. The search space is
first reduced by the fact that every code can be mapped to a
weighted graph, and a different graph is then introduced that
transforms the problem of code equivalence into a problem of
graph isomorphism. By an extension technique, we are able to
classify all optimal codes of length 11 and 12. There are 56 005 876
(11, 3^11, 5) codes and 6493 (12, 3^12, 6) codes. We also find the
smallest codes with trivial automorphism group.
Wed, 01 Aug 2012 00:00:00 GMThttp://hdl.handle.net/1956/65412012-08-01T00:00:00ZOptimal preparation of graph states
http://hdl.handle.net/1956/6536
Optimal preparation of graph states
Cabello, Adán; Danielsen, Lars Eirik; López-Tarrida, Antonio J.; Portillo, José R.
Peer reviewed; Journal article
We show how to prepare any graph state of up to 12 qubits with: (a) the minimum number
of controlled-Z gates, and (b) the minimum preparation depth. We assume only one-qubit and
controlled-Z gates. The method exploits the fact that any graph state belongs to an equivalence
class under local Clifford operations. We extend up to 12 qubits the classification of graph states
according to their entanglement properties, and identify each class using only a reduced set of
invariants. For any state, we provide a circuit with both properties (a) and (b), if it does exist,
or, if it does not, one circuit with property (a) and one with property (b), including the explicit
one-qubit gates needed.
Tue, 12 Apr 2011 00:00:00 GMThttp://hdl.handle.net/1956/65362011-04-12T00:00:00ZAn improved workflow for image- and laser-based virtual geological outcrop modelling
http://hdl.handle.net/1956/6440
An improved workflow for image- and laser-based virtual geological outcrop modelling
Sima, Aleksandra A.
Doctoral thesis
<p>Photorealistic 3D models, representing an object’s surface geometry textured with
conventional photography, are used for visualization, interpretation and spatial
measurement in many disparate fields, such as cultural heritage, archaeology and
throughout the earth sciences, including geology. Virtual models of geological
outcrops allow for large quantities of geometric data, such as sizes of features,
thicknesses of strata, or surface orientations to be extracted in relatively short time
and in areas with difficult accessibility. However, standard analysis is limited to
interpretation of the three standard spectral bands (red, green, blue; RGB) acquired in
the visible spectrum by the conventional digital camera. Complementing the
photorealistic 3D outcrop models with auxiliary spectral data, for example in the
form of hyperspectral imagery, can provide domain experts with additional
geochemical information, adding great potential to studies of mineralogy and
lithology.</p> <p>The existing workflows for creation of photorealistic outcrop models and
integration with terrestrial panoramic hyperspectral data are complex and require
specific knowledge from the field of geomatics. One such processing step is selection
of images taking part in the texture mapping process. Although automated texture
mapping measures are available, in highly redundant image sets they do not
necessarily provide the best results when using all available photos. Therefore
selection of the most suitable texture candidates is required to increase the realism of
the textured models and the processing efficiency. Especially for large models of
rugged terrain, represented by millions of triangles, manual selection of the best
texture candidates can be challenging, because the user must account for occlusions
and ensure that image overlap is sufficient to cover relevant model triangles.</p> <p>The existing workflow for integration of hyperspectral and 3D data also requires
specific skills in geomatics as homologous points between the two datasets need to be
manually selected for registration. Finding such correspondences involves
interpretation of data acquired with different sensors, in different parts of the
electromagnetic spectrum, projections and resolutions. The need to complete such challenging data processing steps by users from outside the geomatics domain poses a
serious obstacle to these methods becoming standardised across geological research
and industry.</p> <p>The research presented in this thesis addressed the two aforementioned
limitations in the data processing workflows with an aim to make the method more
accessible for users from outside of the geomatics domain. Firstly, a new interactive
framework was developed that provides analytical and graphical assistance in
selection of an image subset for geometrically optimised texturing in photorealistic
3D models. Visualisation of spatial relationships between different components of the
datasets was used to support the user’s decision in tasks requiring specific technical
background. Novel texture quality measures were proposed and new automatic image
sorting procedures, originating in computer vision and information theory, were
implemented and tested. The image subsets provided by the automatic procedures
were compared to manually selected sets and their suitability for 3D model texturing
was assessed. Results indicated that the automatic sorting algorithms can be a valid
alternative to manual methods. The resulting textured models were of comparable
quality and completeness, and the time spent in time-consuming reprocessing was
reduced. Anecdotal evidence indicated an increased user confidence in the final
textured model quality and completeness.</p> <p>Secondly, a method for semi-automatic registration of terrestrial hyperspectral
imagery with laser and image data was developed. The proposed data integration
procedure employed the Scale Invariant Feature Transform (SIFT) to automatically
find homologous points between digital RGB images registered in the scanner
coordinate system and short wave infrared cylindrical hyperspectral data. The need
for large numbers of homologous points to be matched required optimisation of the
SIFT operator, as well as a routine for eliminating false matches. The proposed
method automatically provides the control points that are used for registering the
hyperspectral imagery. The results obtained on two datasets with different
characteristics indicated that the proposed method can be used as an alternative to
manual data integration, saving time and minimizing user input during processing.</p> <p>The increased automation of the workflows for creation of photorealistic outcrop
models and integration with auxiliary image data, complemented with computer
assistance to support users’ decisions in the processing steps requiring a background in
geomatics, facilitates the adoption of such techniques in a wider community.</p>
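The elimination of false SIFT matches mentioned in the thesis summary is commonly done with Lowe's nearest/second-nearest ratio test. The sketch below is a generic illustration of that test over plain descriptor vectors, not the specific optimised routine developed in the thesis.

```python
import math

def ratio_test_matches(desc_a, desc_b, ratio=0.8):
    """Match descriptors from image A to image B, keeping a match only if
    the nearest neighbour in B is clearly closer than the second nearest
    (Lowe's ratio test). Ambiguous candidates, which are likely false
    matches, are discarded."""
    matches = []
    for i, da in enumerate(desc_a):
        # distances from descriptor i to every descriptor in B
        dists = sorted((math.dist(da, db), j) for j, db in enumerate(desc_b))
        if len(dists) >= 2 and dists[0][0] < ratio * dists[1][0]:
            matches.append((i, dists[0][1]))
    return matches
```

With real SIFT descriptors (128-dimensional vectors) the surviving pairs serve as candidate homologous points; an outlier-elimination step such as the one proposed in the thesis would then prune the remaining false correspondences before registration.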
Fri, 15 Mar 2013 00:00:00 GMThttp://hdl.handle.net/1956/64402013-03-15T00:00:00ZCoding for passive RFID communication
http://hdl.handle.net/1956/6208
Coding for passive RFID communication
Yang, Guang
Doctoral thesis
<p>This dissertation elaborates on channel coding for reliable communication
in passive RFID systems. RFID applications have been developed
and used widely. Since a passive RFID tag has no power requirements,
passive RFID has received considerable attention for application to
sensor networks, access management, etc.</p> <p>A passive RFID system transfers energy together with information
by means of inductive coupling. The design of coding schemes for inductively
coupled channels is the main task of this work. Due to the properties
of inductive coupling, the communication over the inductively coupled
channel has synchronization loss problems in addition to other classical
channel errors. We therefore design codes that consider synchronization
loss, energy transfer, transmission rate, and complexity of encoding and
decoding. Because of the different properties of reader and tag, the
coding schemes for a passive RFID system are designed differently for
the two directions between a reader and a tag.</p> <p>In this dissertation, electronic circuits as infrastructure for the physical
layer of the system are described. Modulation codes, error control
codes and constrained codes, including their algebraic and structural
properties, related algorithms, and techniques are addressed. Code
combination and code power spectra are also considered as important
issues in the code design that improve the ability of error detection and
correction.</p>
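As a concrete textbook example (not necessarily a scheme from the thesis) of a modulation code addressing the synchronization and energy-transfer constraints described above, Manchester coding guarantees a signal transition in every bit period, which lets the receiver recover the clock and keeps the average power of the inductively coupled signal constant. One common convention is sketched here:

```python
def manchester_encode(bits):
    """Manchester code, one common convention: 0 -> (0,1), 1 -> (1,0).
    Every bit period contains a transition, aiding clock recovery and
    steady energy transfer on an inductively coupled link."""
    out = []
    for b in bits:
        out += [b, 1 - b]
    return out

def manchester_decode(chips):
    """Decode pairs of chips back to bits; an invalid pair (00 or 11)
    indicates a synchronization or transmission error."""
    assert len(chips) % 2 == 0
    bits = []
    for i in range(0, len(chips), 2):
        pair = (chips[i], chips[i + 1])
        if pair == (0, 1):
            bits.append(0)
        elif pair == (1, 0):
            bits.append(1)
        else:
            raise ValueError("invalid Manchester pair: synchronization error")
    return bits
```

The built-in error detection (invalid chip pairs) is the kind of property the thesis weighs against transmission rate and encoder/decoder complexity when designing codes for each direction of the reader-tag link.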
Fri, 31 Aug 2012 00:00:00 GMThttp://hdl.handle.net/1956/62082012-08-31T00:00:00ZTowards Efficient Algorithms in Algebraic Cryptanalysis
http://hdl.handle.net/1956/6196
Towards Efficient Algorithms in Algebraic Cryptanalysis
Schilling, Thorsten Ernst
Doctoral thesis
Thu, 09 Aug 2012 00:00:00 GMThttp://hdl.handle.net/1956/61962012-08-09T00:00:00ZSolving Compressed Right Hand Side Equation Systems with Linear Absorption
http://hdl.handle.net/1956/6195
Solving Compressed Right Hand Side Equation Systems with Linear Absorption
Schilling, Thorsten Ernst; Raddum, Håvard
Peer reviewed; Chapter
In this paper we describe an approach for solving complex
multivariate equation systems related to algebraic cryptanalysis. The
work uses the newly introduced Compressed Right Hand Sides (CRHS)
representation, where equations are represented using Binary Decision
Diagrams (BDD). The paper introduces a new technique for manipulating
a BDD, similar to swapping variables in the well-known sifting method.
Using this technique we develop a new solving method for CRHS
equation systems. The new algorithm is successfully tested on systems
representing reduced variants of Trivium.
Sun, 01 Jan 2012 00:00:00 GMThttp://hdl.handle.net/1956/61952012-01-01T00:00:00ZSolving Equation Systems by Agreeing and Learning
http://hdl.handle.net/1956/6194
Solving Equation Systems by Agreeing and Learning
Schilling, Thorsten Ernst; Raddum, Håvard
Peer reviewed; Chapter
We study sparse non-linear equation systems defined over a
finite field. Representing the equations as symbols and using the Agreeing
algorithm we show how to learn and store new knowledge about the system
when a guess-and-verify technique is used for solving. Experiments
are then presented, showing that our solving algorithm compares favorably
to MiniSAT in many instances.
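A minimal sketch of the pairwise agreeing step, assuming a simplified symbol representation (a tuple of variable names plus a set of partial solutions over those variables); the symbol data structures and the learning mechanism in the paper are more elaborate:

```python
def agree(sym_a, sym_b):
    """One Agreeing step between two symbols.

    Each symbol is (variables, assignments): a tuple of variable names and
    a set of tuples, each tuple a partial solution over those variables.
    Repeatedly delete from each symbol every partial solution whose
    projection onto the shared variables matches no surviving partial
    solution of the other symbol, until a fixed point is reached.
    Returns the two reduced assignment sets.
    """
    vars_a, rows_a = sym_a
    vars_b, rows_b = sym_b
    shared = [v for v in vars_a if v in vars_b]
    ia = [vars_a.index(v) for v in shared]
    ib = [vars_b.index(v) for v in shared]
    proj = lambda row, idx: tuple(row[i] for i in idx)
    changed = True
    while changed:
        pb = {proj(r, ib) for r in rows_b}
        na = {r for r in rows_a if proj(r, ia) in pb}
        pa = {proj(r, ia) for r in na}
        nb = {r for r in rows_b if proj(r, ib) in pa}
        changed = (na != rows_a) or (nb != rows_b)
        rows_a, rows_b = na, nb
    return rows_a, rows_b
```

If agreeing empties some symbol, the current guess is contradictory and can be refuted; in the paper this refutation is the point where new knowledge is learned and stored for later guesses.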
Fri, 01 Jan 2010 00:00:00 GMThttp://hdl.handle.net/1956/61942010-01-01T00:00:00ZPhase Transition in a System of Random Sparse Boolean Equations
http://hdl.handle.net/1956/6193
Phase Transition in a System of Random Sparse Boolean Equations
Schilling, Thorsten Ernst; Zajac, Pavol
Peer reviewed; Chapter
Many problems, including algebraic cryptanalysis, can be
transformed to a problem of solving a (large) system of sparse Boolean
equations. In this article we study 2 algorithms that can be used to
remove some redundancy from such a system: Agreeing, and Syllogism
method. Combined with appropriate guessing strategies, these methods
can be used to solve the whole system of equations. We show that a
phase transition occurs in the initial reduction of the randomly generated
system of equations. When the number of (partial) solutions in
each equation of the system is binomially distributed with probability
of partial solution p, the number of partial solutions remaining after the
initial reduction is very low for p’s below some threshold pt, on the other
hand for p > pt the reduction only occurs with a quickly diminishing
probability.
Sun, 01 Jan 2012 00:00:00 GMThttp://hdl.handle.net/1956/61932012-01-01T00:00:00ZAnalysis of Trivium Using Compressed Right Hand Side Equations
http://hdl.handle.net/1956/6192
Analysis of Trivium Using Compressed Right Hand Side Equations
Schilling, Thorsten Ernst; Raddum, Håvard
Peer reviewed; Chapter
We study a new representation of non-linear multivariate
equations for algebraic cryptanalysis. Using a combination of multiple
right hand side equations and binary decision diagrams, our new representation
allows a very efficient conjunction of a large number of separate
equations. We apply our new technique to the stream cipher Trivium
and variants of Trivium reduced in size. By merging all equations into
one single constraint, manageable in size and processing time, we get a
representation of the Trivium cipher as one single equation.
Sun, 01 Jan 2012 00:00:00 GMThttp://hdl.handle.net/1956/61922012-01-01T00:00:00ZA geometry-based generic predictor for catalytic and allosteric sites
http://hdl.handle.net/1956/6173
A geometry-based generic predictor for catalytic and allosteric sites
Mitternacht, Simon; Berezovsky, Igor N.
Journal article; Peer reviewed
An important aspect of understanding protein allostery, and of artificial effector design, is the characterization and prediction of substrate- and effector-binding sites. To find binding sites in allosteric enzymes, many of which are oligomeric with allosteric sites at domain interfaces, we devise a local centrality measure for residue interaction graphs, which behaves well for both small/monomeric and large/multimeric proteins. The measure is purely structure based and has a clear geometrical interpretation and no free parameters. It is not biased towards typically catalytic residues, a property that is crucial when looking for non-catalytic effector sites, which are potent drug targets.
Sat, 01 Jan 2011 00:00:00 GMThttp://hdl.handle.net/1956/61732011-01-01T00:00:00ZMonte Carlo Study of the Formation and Conformational Properties of Dimers of Aβ42 Variants
http://hdl.handle.net/1956/6169
Monte Carlo Study of the Formation and Conformational Properties of Dimers of Aβ42 Variants
Mitternacht, Simon; Staneva, Iskra; Härd, Torleif; Irbäck, Anders
Journal article; Peer reviewed
Small soluble oligomers, and dimers in particular, of the amyloid β-peptide (Aβ) are believed to play an important pathological role in Alzheimer's disease. Here, we investigate the spontaneous dimerization of Aβ42, with 42 residues, by implicit solvent all-atom Monte Carlo simulations, for the wild-type peptide and the mutants F20E, E22G and E22G/I31E. The observed dimers of these variants share many overall conformational characteristics but differ in several aspects at a detailed level. In all four cases, the most common type of secondary structure is intramolecular antiparallel β-sheets. Parallel, in-register β-sheet structure, as in models for Aβ fibrils, is rare. The primary force driving the formation of dimers is hydrophobic attraction. The conformational differences that we do see involve turns centered in the 20–30 region. The probability of finding turns centered in the 25–30 region, where there is a loop in Aβ fibrils, is found to increase upon dimerization and to correlate with experimentally measured rates of fibril formation for the different Aβ42 variants. Our findings hint at reorganization of this part of the molecule as a potentially critical step in Aβ aggregation.
Sat, 01 Jan 2011 00:00:00 GMThttp://hdl.handle.net/1956/61692011-01-01T00:00:00ZNew Width Parameters of Graphs
http://hdl.handle.net/1956/6166
New Width Parameters of Graphs
Vatshelle, Martin
Doctoral thesis
<p>The main focus of this thesis is on using the divide and conquer technique to
efficiently solve graph problems that are in general intractable. We work in
the field of parameterized algorithms, using width parameters of graphs that
indicate the complexity inherent in the structure of the input graph. We use
the notion of branch decompositions of a set function introduced by Robertson
and Seymour to define three new graph parameters, boolean-width, maximum
matching-width (MM-width) and maximum induced matching-width
(MIM-width). We compare these new graph width parameters to existing
graph parameters by defining partial orders of width parameters. We focus
on tree-width, branch-width, clique-width, module-width and rank-width,
and include a Hasse diagram of these orders containing 32 graph parameters.</p><p>We use the size of a maximum matching in a bipartite graph as a set
function to define MM-width and show that MM-width never differs by more
than a multiplicative factor 3 from tree-width. The main reason for introducing
MM-width is that it simplifies the comparison between tree-width and
parameters defined via branch decomposition of a set function.</p><p>We use the logarithm of the number of maximal independent sets in a bipartite
graph as a set function to define boolean-width. We show that boolean-width
of a graph class is bounded if and only if rank-width is bounded, and
show that the boolean-width of a graph can be as low as the logarithm of the
rank-width of the graph. Given a decomposition of boolean-width k, we design
FPT algorithms parameterized by k, for a large class of graph problems,
whose runtime has a single exponential dependency on the boolean-width,
i.e. O*(2^{O(k^2)}). Moreover we solve Maximum Independent Set in time
O*(2^{2k}) and Minimum Dominating Set in time O*(2^{3k}). These algorithms
are in particular interesting in conjunction with the fact that many graph
classes have boolean-width O(log(n)), e.g. interval graphs.</p><p>MIM-width is defined using the size of a maximum induced matching in a
bipartite graph as set function. The main reason to introduce MIM-width is
that its value is lower than any of the other parameters; in particular MIM-width
is 1 on interval graphs, permutation graphs and convex graphs, and at most 2k on circular k-trapezoid graphs, k-polygon graphs, Dilworth k graphs
and complements of k-degenerate graphs. We show that the FPT algorithms
designed for boolean-width are XP algorithms when parameterized by MIM-width;
this shows that a large class of locally checkable vertex subset and
vertex partitioning problems are polynomial-time solvable on the mentioned
graph classes with bounded MIM-width.</p><p>We give exact algorithms to compute optimal decompositions for all the
three new width parameters and report on the implementation of a heuristic
for finding decompositions of low boolean-width.</p>
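The MM-width set function can be sketched directly: for a cut (A, V\A), its value is the size of a maximum matching in the bipartite graph formed by the edges crossing the cut. Below is a minimal augmenting-path implementation of that cut function (an illustration only; computing MM-width itself further requires minimizing the maximum cut value over all branch decompositions):

```python
def mm_cut_value(edges, side_a):
    """Size of a maximum matching in the bipartite graph induced by the
    edges crossing the cut (side_a, rest) -- the set function used to
    define MM-width. Uses simple augmenting-path (Kuhn's) matching."""
    side_a = set(side_a)
    crossing = {}
    for u, v in edges:
        if (u in side_a) != (v in side_a):       # edge crosses the cut
            a, b = (u, v) if u in side_a else (v, u)
            crossing.setdefault(a, []).append(b)
    match = {}                                   # b-side vertex -> matched a-side vertex
    def try_augment(a, seen):
        for b in crossing.get(a, []):
            if b in seen:
                continue
            seen.add(b)
            if b not in match or try_augment(match[b], seen):
                match[b] = a
                return True
        return False
    return sum(try_augment(a, set()) for a in crossing)
```

For example, on the path 1-2-3-4 with A = {1, 3}, all three edges cross the cut and the maximum matching has size 2, so this cut contributes 2 to the width of any branch decomposition inducing it.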
Mon, 03 Sep 2012 00:00:00 GMThttp://hdl.handle.net/1956/61662012-09-03T00:00:00ZOn the Privacy of Two Tag Ownership Transfer Protocols for RFIDs
http://hdl.handle.net/1956/6111
On the Privacy of Two Tag Ownership Transfer Protocols for RFIDs
Abyaneh, Mohammad Reza Sohizadeh
Conference object
In this paper, the privacy of two recent RFID
tag ownership transfer protocols is investigated against
the tag owners as adversaries.
The first protocol, ROTIV, provides
a privacy-preserving ownership transfer by using HMAC-based
authentication with public key encryption. However,
our passive attack on this protocol shows that any legitimate
owner who has been the owner of a specific tag is able to
trace it either in the past or in the future. Tracing the tag is
also possible via an active attack for any adversary who is
able to tamper with the tag and extract its information.
The second protocol, Chen et al.’s protocol, is an
ownership transfer protocol for passive RFID tags which
conforms to the EPC Class-1 Generation-2 standard. Our attack on
this protocol shows that the previous owners of a particular
tag are able to trace it in the future. Furthermore, they are
even able to obtain the tag’s secret information at any time in the
future, which makes them capable of impersonating the tag.
Presented at the IEEE International Conference for Internet Technology and Secured Transactions (ICITST 2011) in Abu Dhabi, UAE.
Sun, 01 Jan 2012 00:00:00 GMThttp://hdl.handle.net/1956/61112012-01-01T00:00:00ZSecurity Analysis of two Distance-Bounding Protocols
http://hdl.handle.net/1956/6110
Security Analysis of two Distance-Bounding Protocols
Abyaneh, Mohammad Reza Sohizadeh
Conference object
In this paper, we analyze the security of two
recently proposed distance bounding protocols called the
“Hitomi” and the “NUS” protocols. Our results show that the
claimed security of both protocols has been overestimated.
Namely, we show that the Hitomi protocol is susceptible to
a full secret key disclosure attack which not only results in
violating the privacy of the protocol but also can be exploited
for further attacks such as impersonation, mafia fraud and
terrorist fraud attacks. Our results also demonstrate that the
probability of success in a distance fraud attack against the
NUS protocol can be increased up to (3/4)^n, and even slightly
more if the adversary is furnished with some computational
capabilities.
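To make the quoted bound concrete, the sketch below simply evaluates it, assuming (as is standard for distance-bounding analyses, not stated in the abstract) that the adversary must answer each of the n rapid-bit rounds correctly with probability 3/4 for the fraud to succeed:

```python
def distance_fraud_success(n, p_round=0.75):
    """Probability that an adversary wins all n independent rapid-bit
    rounds when each round is answered correctly with probability p_round."""
    return p_round ** n
```

For n = 32 rounds this gives roughly 1e-4, many orders of magnitude above the 2^-32 an ideal distance-bounding protocol would offer.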
Presented at the Workshop on RFID Security and Privacy (RFIDSec 2011) in Amherst, Massachusetts, USA.
Sun, 01 Jan 2012 00:00:00 GMThttp://hdl.handle.net/1956/61102012-01-01T00:00:00ZPassive Cryptanalysis of the UnConditionally Secure Authentication Protocol for RFID Systems
http://hdl.handle.net/1956/6109
Passive Cryptanalysis of the UnConditionally Secure Authentication Protocol for RFID Systems
Abyaneh, Mohammad Reza Sohizadeh
Conference object
Recently, Alomair et al. proposed the first UnConditionally
Secure mutual authentication protocol for low-cost
RFID systems (UCS-RFID). The security of UCS-RFID
relies on five dynamic secret keys which are updated
at every protocol run using a fresh random number (nonce)
secretly transmitted from a reader to tags.
Our results show that, at the highest security level of the
protocol (security parameter = 256), inferring a nonce is feasible
with probability 0.99 by eavesdropping (observing)
about 90 runs of the protocol. Finding a nonce enables a
passive attacker to recover all five secret keys of the protocol.
To do so, we propose a three-phase probabilistic approach
in this paper. Our attack recovers the secret keys with a
probability that increases by accessing more protocol runs.
We also show that tracing a tag using this protocol is
possible even with fewer runs of the protocol.
Presented at the International Conference on Information Security and Cryptology (ICISC 2010) in Seoul, Korea.
Sun, 01 Jan 2012 00:00:00 GMThttp://hdl.handle.net/1956/61092012-01-01T00:00:00ZOn the Security of Non-Linear HB (NLHB) Protocol Against Passive Attack
http://hdl.handle.net/1956/6108
On the Security of Non-Linear HB (NLHB) Protocol Against Passive Attack
Abyaneh, Mohammad Reza Sohizadeh
Conference object
As a variant of the HB authentication protocol
for RFID systems, whose resistance against passive attacks
relies on the complexity of decoding linear codes, Madhavan et
al. presented the Non-Linear HB (NLHB) protocol. In contrast
to HB, NLHB relies on the complexity of decoding a class
of non-linear codes to render the passive attacks proposed
against HB ineffective. Based on the fact that there has been
no passive method for decoding a random
non-linear code, the authors have claimed that NLHB’s security
margin is very close to its key size.
In this paper, we show that passive attacks against the HB
protocol can still be applied to NLHB, and that this protocol
does not provide the desired security margin. In our attack,
we first linearize the non-linear part of NLHB to obtain an
HB equivalent for NLHB, and then exploit the passive attack
techniques proposed for HB to evaluate the security
margin of NLHB. The results show that although NLHB’s
security margin is relatively higher than HB’s against similar
passive attack techniques, it has been overestimated and,
contrary to what is claimed, NLHB is vulnerable to passive
attacks against HB, especially when the noise vector in the
protocol has low weight.
Presented at the IEEE/IFIP International Conference on Embedded and Ubiquitous Computing (EUC-TrustCom 2010) in Hong Kong, China.
Sun, 01 Jan 2012 00:00:00 GMThttp://hdl.handle.net/1956/61082012-01-01T00:00:00ZColluding Tags Attack on the ECC-based Grouping Proofs for Rfids
http://hdl.handle.net/1956/6107
Colluding Tags Attack on the ECC-based Grouping Proofs for Rfids
Abyaneh, Mohammad Reza Sohizadeh
Conference object
Recently, a new privacy-preserving elliptic-curve-based grouping proof protocol with colluding tag prevention (CTP) has been proposed. The CTP protocol is claimed to be resistant against colluding tags attacks, in which the involved tags can exchange some messages via another reader before the protocol starts, without revealing their private keys.
In this paper, we show that the CTP protocol is vulnerable to a colluding tags attack scenario. In addition, we propose a new elliptic-curve-based grouping proof protocol which fixes the problem. Our proposal is based on a formally proven privacy-preserving authentication protocol and has the advantage of being resistant against colluding tags attacks with the same amount of computation.
International Conference on Security and Cryptography (SECRYPT 2011) in Seville, Spain.
Sun, 01 Jan 2012 00:00:00 GMThttp://hdl.handle.net/1956/61072012-01-01T00:00:00ZSecurity Analysis Of Lightweight Schemes for RFID Systems
http://hdl.handle.net/1956/6106
Security Analysis Of Lightweight Schemes for RFID Systems
Abyaneh, Mohammad Reza Sohizadeh
Doctoral thesis
<p>This thesis mainly examines the security analysis of lightweight protocols proposed for providing security and privacy in RFID systems. To achieve this goal, we first give a brief introduction to RFID systems, covering their history, system components, applications, standards and related issues. The main issues highlighted in the thesis are security and privacy. One possible way to provide RFID systems with privacy and security is cryptography. But conventional cryptography is too heavyweight for highly constrained devices such as RFID tags. The alternative is lightweight cryptography, which aims at squeezing cryptographic schemes into the RFID tags. A brief overview of the thesis is illustrated in Figure 1.</p>
<p>The thesis also contains a categorization of the lightweight proposals and related work in the literature. Finally, we explain how the security of a lightweight scheme can be analyzed and evaluated. To do so, the security requirements, adversarial models and potential attacks for lightweight schemes are presented. In this part, we mainly focus on the security analysis of lightweight protocols, because the security analysis of lightweight primitives and algorithms is more or less the same as for conventional primitives and has already been widely discussed in the literature.</p>
Wed, 08 Aug 2012 00:00:00 GMThttp://hdl.handle.net/1956/61062012-08-08T00:00:00ZModelling migration patterns of fish using depth and temperature preferences
http://hdl.handle.net/1956/6081
Modelling migration patterns of fish using depth and temperature preferences
Natvig, Erik
Master thesis
Time series of depth and temperature derived from electronic tagging of fish have been used to construct a stochastic model that aims at capturing the main characteristics of the observations. Mixed Ornstein-Uhlenbeck process models are used to model attraction towards different concentration points in the depth/temperature plane, and a methodology for determining the model parameters is presented. Simulations of the model display dynamics very similar to the original data. Further, an optimization problem for finding a path expressing the geographical location of the tagged fish is formulated. An interpolation procedure using thin-plate splines for interpolating an atlas of ocean temperatures is introduced. As general-purpose optimization solvers fail to find optimal solutions to the problem, a special-purpose algorithm, based on an ensemble search, is developed. The algorithm solves the problem to optimality, both for test instances and for real data, but demonstrates that there may be many radically different paths through the ocean that match the temperature and depth time series. The algorithm has the potential to produce good estimates of the geolocation of fish, provided external information is used to guide the algorithm and to select the most likely solutions.
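As a minimal sketch of the modelling idea, a single Ornstein-Uhlenbeck process attracted towards one concentration point can be simulated with the Euler-Maruyama scheme. The attraction point, mean-reversion rate and noise level below are invented for illustration; the thesis uses mixed OU models with several concentration points in the depth/temperature plane.

```python
import math
import random

def simulate_ou(mu, theta, sigma, x0, dt, n, rng):
    """Euler-Maruyama discretization of dX_t = theta*(mu - X_t) dt + sigma dW_t."""
    x = x0
    path = [x]
    for _ in range(n):
        x += theta * (mu - x) * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        path.append(x)
    return path

# Hypothetical example: depth (metres) attracted towards a 120 m concentration point.
rng = random.Random(42)
depth = simulate_ou(mu=120.0, theta=0.5, sigma=5.0, x0=10.0, dt=0.1, n=1000, rng=rng)
```

A mixed model would switch between several (mu, theta, sigma) regimes; estimating those parameters from tag data is the fitting problem the thesis addresses.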
Fri, 27 Apr 2012 00:00:00 GMThttp://hdl.handle.net/1956/60812012-04-27T00:00:00ZGeneralized Bent and/or Negabent Constructions
http://hdl.handle.net/1956/6080
Generalized Bent and/or Negabent Constructions
Ådlandsvik, Yngve
Master thesis
In this thesis, we generalize the Maiorana-McFarland construction for bent, negabent and bent-negabent Boolean functions and describe a way to computationally search for constructions using these generalizations. We present some of these constructions and their properties.
Fri, 27 Apr 2012 00:00:00 GMThttp://hdl.handle.net/1956/60802012-04-27T00:00:00ZStøtte for Geodata i Dynamic Presentation Generator
http://hdl.handle.net/1956/6079
Støtte for Geodata i Dynamic Presentation Generator
Waage, Aleksander Vatle
Master thesis
Fri, 27 Jan 2012 00:00:00 GMThttp://hdl.handle.net/1956/60792012-01-27T00:00:00ZSecurity API for Java ME: secureXdata
http://hdl.handle.net/1956/6078
Security API for Java ME: secureXdata
Valvik, Remi André Bognøy
Master thesis
The usage of mobile phones, PDAs and other mobile communication devices in the context of health is an emerging part of eHealth. In 2010 the American National Institutes of Health defined mHealth as "The delivery of healthcare services via mobile communication devices". While the idea of using mobile devices as part of healthcare is not new, a recent increase in the mobile phone penetration rate in low-income countries makes mHealth a cost-effective way of providing better healthcare in areas of the world where this is much needed.
This thesis focuses on security aspects of mobile data collection systems (MDCS), which are specialized mHealth systems that use mobile devices to collect data about the condition and trends of a country's health status, generally by filling out forms on a mobile device.
Even though there are a number of MDCS in use today, most of these fail to systematically address the security and privacy concerns while handling private or personal information such as medical records.
Building on existing work done by the mHealth Security Group at Department of Informatics, University in Bergen, the candidate in this thesis has extended an existing prototype implementation of a secure protocol into a comprehensive security API. The goal being to allow easy securing of new and existing Java ME based clients used by MDCS.
Fri, 24 Feb 2012 00:00:00 GMThttp://hdl.handle.net/1956/60782012-02-24T00:00:00ZComputation of Treespan. A Generalization of Bandwidth to Treelike Structures
http://hdl.handle.net/1956/6065
Computation of Treespan. A Generalization of Bandwidth to Treelike Structures
Dregi, Markus Sortland
Master thesis
Motivated by a search game, Fomin, Heggernes and Telle [Algorithmica, 2005] defined the graph parameter treespan, a generalization of the well-studied parameter bandwidth. Treespan is the maximum number of appearances of a vertex in an ordered tree decomposition, i.e. a tree decomposition introducing at most one new vertex in each bag. In this thesis, we investigate the computational tractability of the problem Treespan, which asks whether the treespan of a given graph is at most a given integer k. First we introduce a new perspective on the problem via an equivalent parameter which we call adjacencyspan. It provides, in our opinion, a clearer understanding of the nature of the problem.
We provide structural results related to adjacencyspan, and combine these with dynamic programming to solve Treespan in polynomial time for fixed values of k, thereby proving the problem to be in XP. As their final open problem, Fomin et al. [Algorithmica, 2005] asked whether Treespan is polynomial-time solvable for trees of degree higher than 3. We solve this problem by proving Treespan to be polynomial-time solvable for trees of bounded maximum degree d, for every fixed d. In the area of fixed-parameter tractability, we give a polynomial kernel for Treespan parameterized by both the required treespan and the vertex cover number of the input graph.
It is a classical result, first proven by Lenstra [Mathematics of Operations Research, 1983], that p-Integer Linear Programming Feasibility is fixed-parameter tractable. In his book "Invitation to Fixed-Parameter Algorithms", Niedermeier specifically asks for more applications of this result. In this thesis we provide another application by using it to obtain a fixed-parameter tractable algorithm for Treespan parameterized by the vertex cover number.
The thesis does not only have theoretical implications: we also give algorithms that by far outperform previously known algorithms in practical terms.
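As a toy illustration of the quantity being bounded, consider counting vertex appearances across bags. This simplified sketch uses a path-shaped decomposition given as an ordered list of bags (treespan proper is defined over ordered tree decompositions, which this example does not attempt to capture):

```python
from collections import Counter

def max_span(bags):
    """Maximum number of bags in which any single vertex appears.
    `bags` is an ordered decomposition given as a list of vertex sets;
    a path-shaped, simplified stand-in for an ordered tree decomposition."""
    count = Counter()
    for bag in bags:
        for v in bag:
            count[v] += 1
    return max(count.values())

# Path decomposition of the path a-b-c-d: every vertex lies in at most 2 bags.
bags = [{"a", "b"}, {"b", "c"}, {"c", "d"}]
span = max_span(bags)  # -> 2
```

The decision problem studied in the thesis asks whether some ordered tree decomposition achieves span at most k, which is far harder than evaluating the span of a given decomposition as done here.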
Thu, 14 Jun 2012 00:00:00 GMThttp://hdl.handle.net/1956/60652012-06-14T00:00:00ZOptimization Models for Turbine Location in Wind Farms
http://hdl.handle.net/1956/5882
Optimization Models for Turbine Location in Wind Farms
Haugland, Jan Kristian
Master thesis
The topic of this thesis is wind farm optimization. The goal is to be able to decide where to install wind turbines within a given region in order to maximize the power output in two different scenarios: for a fixed number of turbines with free placement, and for a limited number of possible locations and a variable number of turbines, with a fixed cost associated with the installation of each turbine that is subtracted from the power output. These are referred to as "Problem 1" and "Problem 2". First, we develop a new, simple wake model. It is based on a model described by Katic et al. in 1986, with some improvements based on the authors' own comments and data. This is then further developed into complete mathematical models for Problem 1 and Problem 2. We consider a few heuristic methods for carrying out the optimization, including one that we have not seen in the literature. These are tried out on simple test cases. Finally, we try out exact optimization on two simple cases, and try out the heuristic methods on a larger sample of cases. We conclude that the new wake model, combined with the method that displays the best performance in the experiments, appears to be a useful tool in designing wind farms.
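For context, the Katic et al. wake model combines single-wake velocity deficits by a sum of squares. A commonly cited form is shown below as background only (not the thesis's modified model), with a the axial induction factor, k the wake decay constant, r_0 the rotor radius and x the downstream distance:

```latex
\delta_i \;=\; \frac{2a}{\bigl(1 + k x / r_0\bigr)^{2}},
\qquad
\delta \;=\; \sqrt{\textstyle\sum_i \delta_i^{2}} .
```

Here \delta_i is the fractional velocity deficit induced by turbine i, and \delta the combined deficit at a downstream point overlapped by several wakes.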
Thu, 24 May 2012 00:00:00 GMThttp://hdl.handle.net/1956/58822012-05-24T00:00:00ZFiltering of FTLE for Visualizing Spatial Separation in Unsteady 3D Flow
http://hdl.handle.net/1956/5860
Filtering of FTLE for Visualizing Spatial Separation in Unsteady 3D Flow
Pobitzer, Armin; Peikert, Ronald; Fuchs, Raphael; Theisel, Holger; Hauser, Helwig
Peer reviewed; Journal article
In many cases, feature detection for flow visualization is structured
in two phases: first candidate identification, and then filtering.
With this paper, we propose to use the directional information contained
in the finite-time Lyapunov exponents (FTLE) computation,
in order to filter the FTLE field. In this way we focus on those
separation structures that delineate flow compartments which develop
into different spatial locations, as compared to those that
separate parallel flows of different speed. We provide a discussion
of the underlying theory and our related considerations. We derive
a new filtering scheme and demonstrate its effect in the context of
several selected fluid flow cases, especially in comparison with unfiltered
FTLE visualization. Since previous work has provided insight
with respect to the studied flow patterns, we are able to provide a discussion of the resulting visible separation structures.
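For readers unfamiliar with the quantity, the finite-time Lyapunov exponent is standardly defined from the flow map \phi_T; this is the textbook definition, included for reference rather than quoted from the paper:

```latex
\sigma_T(\mathbf{x}) \;=\; \frac{1}{|T|}\,
  \ln \sqrt{\lambda_{\max}\!\Bigl( \nabla\phi_T(\mathbf{x})^{\mathsf{T}}\, \nabla\phi_T(\mathbf{x}) \Bigr)} ,
```

where \lambda_{\max}(\cdot) is the largest eigenvalue of the right Cauchy-Green deformation tensor; ridges of \sigma_T delineate the separation structures that the proposed scheme filters.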
Sun, 01 Jan 2012 00:00:00 GMThttp://hdl.handle.net/1956/58602012-01-01T00:00:00ZEnergy-scale Aware Feature Extraction for Flow Visualization
http://hdl.handle.net/1956/5859
Energy-scale Aware Feature Extraction for Flow Visualization
Pobitzer, Armin; Tutkun, Murat; Andreassen, Øyvind; Fuchs, Raphael; Peikert, Ronald; Hauser, Helwig
Peer reviewed; Journal article
In the visualization of flow simulation data, feature detectors often
tend to result in overly rich response, making some sort of filtering
or simplification necessary to convey meaningful images. In this
paper we present an approach that builds upon a decomposition of
the flow field according to dynamical importance of different scales
of motion energy. Focusing on the high-energy scales leads to a
reduction of the flow field while retaining the underlying physical
process. The presented method acknowledges the intrinsic structures of the flow according to its energy and therefore makes it possible to focus on the energetically most interesting aspects of the flow. Our
analysis shows that this approach can be used for methods based
on both local feature extraction and particle integration and we provide a discussion of the error caused by the approximation. Finally,
we illustrate the use of the proposed approach for both a local
and a global feature detector and in the context of numerical flow
simulations.
Sat, 01 Jan 2011 00:00:00 GMThttp://hdl.handle.net/1956/58592011-01-01T00:00:00ZA Statistics-based Dimension Reduction of the Space of Path Line Attributes for Interactive Visual Flow Analysis
http://hdl.handle.net/1956/5858
A Statistics-based Dimension Reduction of the Space of Path Line Attributes for Interactive Visual Flow Analysis
Pobitzer, Armin; Lež, Alan; Matković, Krešimir; Hauser, Helwig
Peer reviewed; Journal article
Recent work has shown the great potential of interactive flow
analysis by the analysis of path lines. The choice of suitable
attributes, describing the path lines, is, however, still an open question.
This paper addresses this question by performing a statistical analysis of the path line attribute space. In this way we are able to balance the usage of computing power and storage with the necessity not to lose relevant information. We demonstrate how a carefully chosen attribute set can improve the benefits of state-of-the-art interactive flow analysis. The results obtained are compared
to previously published work.
Sun, 01 Jan 2012 00:00:00 GMThttp://hdl.handle.net/1956/58582012-01-01T00:00:00ZThe State of the Art in Topology-based Visualization of Unsteady Flow
http://hdl.handle.net/1956/5857
The State of the Art in Topology-based Visualization of Unsteady Flow
Pobitzer, Armin; Peikert, Ronald; Fuchs, Raphael; Schindler, Benjamin; Kuhn, Alexander; Theisel, Holger; Matković, Krešimir; Hauser, Helwig
Peer reviewed; Journal article
Vector fields are a common concept for the representation of
many different kinds of flow phenomena in science and engineering.
Methods based on vector field topology are known for
their convenience for visualizing and analyzing steady flows, but
a counterpart for unsteady flows is still missing. However, a lot
of good and relevant work aiming at such a solution is available.
We give an overview of previous research leading towards topology-based and topology-inspired visualization of unsteady flow, pointing
out the different approaches and methodologies involved as well
as their relation to each other, taking classical (i.e., steady) vector
field topology as our starting point. Particularly, we focus on
Lagrangian methods, space-time domain approaches, local methods,
and stochastic and multi-field approaches. Furthermore, we
illustrate our review with practical examples for the different approaches.
Sat, 01 Jan 2011 00:00:00 GMThttp://hdl.handle.net/1956/58572011-01-01T00:00:00ZInteractive Visual Analysis of Time-dependent Flows: Physics- and Statistics-based Semantics
http://hdl.handle.net/1956/5856
Interactive Visual Analysis of Time-dependent Flows: Physics- and Statistics-based Semantics
Pobitzer, Armin
Doctoral thesis
With the increasing use of numerical simulations in the fluid mechanics community in recent years, flow visualization has gained increasing importance as an advanced analysis tool for the simulation output. Up to now, flow visualization has mainly focused on the extraction and visualization of structures that are defined by their semantic meaning. Examples of such structures are vortices and separation structures between different groups of particles that travel together.
In order to deepen our understanding of structures linked to certain flow phenomena, e.g., how and why they appear, evolve, and are finally destroyed, linking structures to semantic meaning that is not attributed to them by their very definition is also a highly promising research direction to pursue.
In this thesis we provide several approaches on how to augment structures
stemming from classical flow visualization techniques by additional semantic
information originating from new methods based on physics and statistics. In
particular, we target separation structures, the linking of structures with a local
semantics to global flow phenomena, and minimal representation of particle
dynamics in the context of path line attributes.
Fri, 22 Jun 2012 00:00:00 GMThttp://hdl.handle.net/1956/58562012-06-22T00:00:00ZSolving the pooling problem with LMI relaxations
http://hdl.handle.net/1956/5853
Solving the pooling problem with LMI relaxations
Frimannslund, Lennart; El Ghami, Mohamed; Alfaki, Mohammed; Haugland, Dag
Peer reviewed; Chapter
We consider the standard pooling problem with a single quality parameter, which is a polynomial global optimization problem occurring, among other places, in the oil industry. In this paper, we show that if the feasible set has a nonempty interior, the problem can be solved by a hierarchy of semidefinite relaxations whose sequence of optimal values converges to the global optimum. For a fixed relaxation order, this technique provides tight lower bounds for the global objective function value. In our experiments, for low-order relaxations, the lower bound provided by this method matches the true global optimum in several instances.
A short version of this paper is published in: S. Cafieri, B. G.-Tóth, E. Hendrix,
L. Liberti and F. Messine (Eds.), Proceedings of the Toulouse Global Optimization
Workshop (pp. 51–54), 2010.
Sun, 01 Jan 2012 00:00:00 GMThttp://hdl.handle.net/1956/58532012-01-01T00:00:00ZComparison of Discrete and Continuous Models for the Pooling Problem
http://hdl.handle.net/1956/5848
Comparison of Discrete and Continuous Models for the Pooling Problem
Alfaki, Mohammed; Haugland, Dag
Chapter; Peer reviewed
The pooling problem is an important global optimization problem which is encountered in many industrial settings. It is traditionally modeled as a bilinear, nonconvex optimization problem, and
solved by branch-and-bound algorithms where the subproblems are convex. In some industrial
applications, for instance in pipeline transportation of natural gas, a different modeling approach
is often made. Rather than defining it as a bilinear problem, the range of qualities is discretized,
and the complicating constraints are replaced by linear ones involving integer variables. Consequently,
the pooling problem is approximated by a mixed-integer programming problem. With
a coarse discretization, this approach represents a saving in computational effort, but may also
lead to less accurate modeling. Justified guidelines for choosing between a bilinear and a discrete
model seem to be scarce in the pooling problem literature. In the present work, we study
discretized versions of models that have been proved to work well when formulated as bilinear
programs. Through extensive numerical experiments, we compare the discrete models to their
continuous ancestors. In particular, we study how the level of discretization must be chosen if a
discrete model is going to be competitive in both running time and accuracy.
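The discretization idea can be sketched on an invented single-pool instance with two sources. All numbers below are hypothetical, and real discrete models are solved as mixed-integer programs rather than by enumeration; the point is only that once the blending proportion is fixed to a grid value, the quality constraint becomes linear:

```python
def blend_quality(flows, qualities):
    """Volume-weighted average quality of the streams entering a pool."""
    total = sum(flows)
    return sum(f * q for f, q in zip(flows, qualities)) / total

def best_discretized(q1, q2, q_max, profit1, profit2, levels):
    """Approximate the bilinear pooling decision by restricting the proportion
    of source 1 to `levels` evenly spaced grid values; each fixed grid value
    makes the quality bound linear, as in the discretized models."""
    best = None
    for i in range(levels + 1):
        p = i / levels                          # discretized proportion of source 1
        q = blend_quality([p, 1.0 - p], [q1, q2])
        if q <= q_max:                          # quality bound at the terminal
            profit = p * profit1 + (1.0 - p) * profit2
            if best is None or profit > best[0]:
                best = (profit, p)
    return best

# Invented instance: source 1 is more profitable but dirtier, so the
# quality cap q_max forces a blend of the two sources.
result = best_discretized(q1=3.0, q2=1.0, q_max=2.0, profit1=10.0, profit2=4.0, levels=10)
```

A coarser grid (smaller `levels`) shrinks the search space but may miss the true optimal proportion, which is exactly the running-time versus accuracy trade-off the paper studies.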
Sat, 01 Jan 2011 00:00:00 GMThttp://hdl.handle.net/1956/58482011-01-01T00:00:00ZModels and Solution Methods for the Pooling Problem
http://hdl.handle.net/1956/5847
Models and Solution Methods for the Pooling Problem
Alfaki, Mohammed
Doctoral thesis
Pipeline transportation of natural gas is largely affected by restrictions regarding
gas quality imposed by the market and the actual quality of the gas produced at
sources. From the sources, gas flow streams of unequal compositions are mixed
in intermediate tanks (pools) and blended again in terminal points. At the pools
and the terminals, the quality of the mixture is given as volume-weighted average
of the qualities of each mixed gas flow stream. The optimization problem of
allocating flow in pipeline transportation networks at minimum cost is referred
to as the pooling problem. This problem is frequently encountered not only in gas transportation planning, but also in process industries such as petrochemicals.
The pooling problem is a well-studied global optimization problem, which is
formulated as a nonconvex (bilinear) problem, and consequently the problem can
possibly have many local optima. Despite the strong NP-hardness of the problem,
which is proved formally in this thesis, much progress in solving small to
moderate size instances to global optimality has recently been made by use of
strong formulations. However, the literature offers few approximation algorithms and other inexact methods dedicated to large-scale instances.
The main contribution of this thesis is the development of strong formulations
and efficient solution methods for the pooling problem. In this thesis, we develop
a new formulation that proves to be stronger than other formulations based on
proportion variables for the standard pooling problem. For the generalized case, we propose a multi-commodity flow formulation, and prove that it is stronger than formulations from the literature.
Regarding the solution methods, the thesis proposes three solution approaches
to tackle the problem. In the first, we discuss solving a simplified version of the standard pooling problem using a solution strategy based on a sequence of semidefinite programming relaxations. The second approach is based on a discretization method in which the pooling problem is approximated
by a mixed-integer programming problem. Finally, we give a greedy construction
method especially designed to find good feasible solutions for large-scale
instances.
Mon, 18 Jun 2012 00:00:00 GMThttp://hdl.handle.net/1956/58472012-06-18T00:00:00ZOn a New Method for Derivative Free Optimization
http://hdl.handle.net/1956/5783
On a New Method for Derivative Free Optimization
Frimannslund, Lennart; Steihaug, Trond
Journal article
A new derivative-free optimization method for unconstrained optimization of partially separable functions is presented. Using average curvature information computed from sampled function values, the method generates an average Hessian-like matrix and uses its eigenvectors as new search directions. Numerical experiments demonstrate that this new derivative-free optimization method has the very desirable property of avoiding saddle points. This is illustrated on two test functions and compared to other well-known derivative-free methods. Further, we compare the efficiency of the new method with two classical derivative-based methods using a class of test problems.
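The core idea of using curvature eigenvectors as search directions can be sketched in two dimensions. This is an illustration under simplifying assumptions, using finite differences on a single point rather than the paper's averaged curvature information over sampled function values:

```python
import math

def fd_hessian_2d(f, x, y, h=0.5):
    """Central-difference estimate of the 2x2 Hessian of f at (x, y);
    exact for quadratics, which keeps this illustration well-behaved."""
    a = (f(x + h, y) - 2.0 * f(x, y) + f(x - h, y)) / h**2
    c = (f(x, y + h) - 2.0 * f(x, y) + f(x, y - h)) / h**2
    b = (f(x + h, y + h) - f(x + h, y - h)
         - f(x - h, y + h) + f(x - h, y - h)) / (4.0 * h**2)
    return a, b, c  # the symmetric matrix [[a, b], [b, c]]

def eigen_directions(a, b, c):
    """Orthonormal eigenvectors of [[a, b], [b, c]], usable as search directions."""
    theta = 0.5 * math.atan2(2.0 * b, a - c)
    v1 = (math.cos(theta), math.sin(theta))
    v2 = (-math.sin(theta), math.cos(theta))
    return v1, v2

# Ill-conditioned quadratic with a cross term: a curvature-aware method would
# search along the eigen-directions of the sampled Hessian, not the raw axes.
f = lambda x, y: x * x + 10.0 * y * y + x * y
a, b, c = fd_hessian_2d(f, 0.0, 0.0)
v1, v2 = eigen_directions(a, b, c)
```

Searching along v1 and v2 decouples the coordinates of a quadratic model, which is the intuition behind using the eigenvectors of an average Hessian-like matrix as new search directions.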
Sat, 01 Jan 2011 00:00:00 GMThttp://hdl.handle.net/1956/57832011-01-01T00:00:00ZProofs, Types and Lambda Calculus - datasets
http://hdl.handle.net/1956/5657
Proofs, Types and Lambda Calculus - datasets
Polonsky, Andrew
Dataset
Datasets and source code for the doctoral thesis "Proofs, Types and Lambda Calculus" by Andrew Polonsky, defended 17.01.2011.
Tue, 28 Feb 2012 00:00:00 GMThttp://hdl.handle.net/1956/56572012-02-28T00:00:00ZModulation of Transcriptional and Inflammatory Responses in Murine Macrophages by the Mycobacterium tuberculosis Mammalian Cell Entry (Mce) 1 Complex
http://hdl.handle.net/1956/5639
Modulation of Transcriptional and Inflammatory Responses in Murine Macrophages by the Mycobacterium tuberculosis Mammalian Cell Entry (Mce) 1 Complex
Stavrum, Ruth; Stavrum, Anne-Kristin; Valvatne, Håvard; Riley, Lee W.; Ulvestad, Elling; Jonassen, Inge; Assmus, Jørg; Doherty, Tanya Mark; Grewal, Harleen M. S.
Peer reviewed; Journal article
The outcome of many infections depends on the initial interactions between agent and host. Aiming at elucidating the effect of the M. tuberculosis Mce1 protein complex on host transcriptional and immunological responses to infection with M. tuberculosis, RNA from murine macrophages at 15, 30, 60 min, 4 and 10 hrs post-infection with M. tuberculosis H37Rv or Δmce1 H37Rv was analyzed by whole-genome microarrays and RT-QPCR. Immunological responses were measured using a 23-plex cytokine assay. Compared to uninfected controls, 524 versus 64 genes were up-regulated by 15 min post H37Rv- and Δmce1 H37Rv-infection, respectively. By 15 min post-H37Rv infection, a decline of 17 cytokines combined with up-regulation of Ccl24 (26.5-fold), Clec4a2 (23.2-fold) and Pparc (10.5-fold) indicated an anti-inflammatory response initiated by IL-13. Down-regulation of Il13ra1 combined with up-regulation of Il12b (30.2-fold) suggested a switch to a pro-inflammatory response by 4 hrs post-H37Rv infection. Whereas no significant change in cytokine concentration or transcription was observed during the first hour post-Δmce1 H37Rv infection, a significant decline of IL-1β, IL-9, IL-13, Eotaxin and GM-CSF combined with increased transcription of Il12b (25.1-fold) and Inb1 (17.9-fold) by 4 hrs indicated a pro-inflammatory response. The balance between pro- and anti-inflammatory responses during the early stages of infection may have a significant bearing on the outcome.
Mon, 24 Oct 2011 00:00:00 GMThttp://hdl.handle.net/1956/56392011-10-24T00:00:00ZConserved BK Channel-Protein Interactions Reveal Signals Relevant to Cell Death and Survival
http://hdl.handle.net/1956/5637
Conserved BK Channel-Protein Interactions Reveal Signals Relevant to Cell Death and Survival
Sokolowski, Bernd; Orchard, Sandra; Harvey, Margaret; Sridhar, Settu; Sakai, Yoshihisa
Peer reviewed; Journal article
The large-conductance Ca2+-activated K+ (BK) channel and its β-subunit underlie tuning in non-mammalian sensory or hair cells, whereas in mammals its function is less clear. To gain insights into species differences and to reveal putative BK functions, we undertook a systems analysis of BK and BK-Associated Proteins (BKAPs) in the chicken cochlea and compared these results to other species. We identified 110 putative partners from cytoplasmic and membrane/cytoskeletal fractions, using a combination of coimmunoprecipitation, 2-D gel, and LC-MS/MS. Partners included 14-3-3γ, valosin-containing protein (VCP), stathmin (STMN), cortactin (CTTN), and prohibitin (PHB), of which 16 partners were verified by reciprocal coimmunoprecipitation. Bioinformatics revealed binary partners, the resultant interactome, subcellular localization, and cellular processes. The interactome contained 193 proteins involved in 190 binary interactions in subcellular compartments such as the ER, mitochondria, and nucleus. Comparisons with mice showed shared hub proteins that included the N-methyl-D-aspartate receptor (NMDAR) and ATP-synthase. Ortholog analyses across six species revealed conserved interactions involving apoptosis, Ca2+ binding, and trafficking in chicks, mice, and humans. Functional studies using recombinant BK and RNAi in a heterologous expression system revealed that proteins important to cell death/survival, such as annexin A5, cactin, lamin, superoxide dismutase, and VCP, caused a decrease in BK expression. This revelation led to an examination of specific kinases and their effectors relevant to cell viability. Sequence analyses of the BK C-terminus across 10 species showed putative binding sites for 14-3-3, RAC-α serine/threonine-protein kinase 1 (Akt), glycogen synthase kinase-3β (GSK3β) and phosphoinositide-dependent kinase-1 (PDK1). Knockdown of 14-3-3 and Akt caused an increase in BK expression, whereas silencing of GSK3β and PDK1 had the opposite effect. This comparative systems approach suggests conservation of BK function across different species, in addition to novel functions that may include the initiation of signals relevant to cell death/survival.
Fri, 09 Dec 2011 00:00:00 GMThttp://hdl.handle.net/1956/56372011-12-09T00:00:00ZBinding Leverage as a Molecular Basis for Allosteric Regulation
http://hdl.handle.net/1956/5617
Binding Leverage as a Molecular Basis for Allosteric Regulation
Mitternacht, Simon; Berezovsky, Igor N.
Peer reviewed; Journal article
Allosteric regulation involves conformational transitions or fluctuations between a few closely related states, caused by the
binding of effector molecules. We introduce a quantity called binding leverage that measures the ability of a binding site to
couple to the intrinsic motions of a protein. We use Monte Carlo simulations to generate potential binding sites and either
normal modes or pairs of crystal structures to describe relevant motions. We analyze single catalytic domains and
multimeric allosteric enzymes with complex regulation. For the majority of the analyzed proteins, we find that both catalytic
and allosteric sites have high binding leverage. Furthermore, our analysis of the catabolite activator protein, which is
allosteric without conformational change, shows that its regulation involves other types of motion than those modulated at
sites with high binding leverage. Our results point to the importance of incorporating dynamic information when predicting
functional sites. Because it is possible to calculate binding leverage from a single crystal structure, it can be used for
characterizing proteins of unknown function and predicting latent allosteric sites in any protein, with implications for drug
design.
Allosteric protein regulation is the mechanism by which binding of a molecule to one site in a protein affects the activity at another site. Although the two classical phenomenological models, Monod-Wyman-Changeux (MWC) and Koshland-Némethy-Filmer (KNF), span from the case of hemoglobin to membrane receptors, they do not describe the intramolecular interactions involved. The coupling between two allosterically connected sites commonly takes place through coherent collective motion involving the whole protein. We therefore introduce a quantity called binding leverage to measure the strength of the coupling between particular binding sites and such motions. We show that high binding leverage is a characteristic of both allosteric sites and catalytic sites, emphasizing that both enzymatic function and allosteric regulation require a coupling between ligand binding and protein dynamics. We also consider the first known case of purely entropic allostery, where ligand binding only affects the amplitudes of fluctuations. We find that the binding site in this protein does not primarily connect to collective motions – instead the modulation of fluctuations is controlled from a deeply buried and highly connected site. Finally, sites with high binding leverage but no known biological function could be latent allosteric sites, and thus drug targets.
Thu, 15 Sep 2011 00:00:00 GMThttp://hdl.handle.net/1956/56172011-09-15T00:00:00ZCoherent Conformational Degrees of Freedom as a Structural Basis for Allosteric Communication
http://hdl.handle.net/1956/5616
Coherent Conformational Degrees of Freedom as a Structural Basis for Allosteric Communication
Mitternacht, Simon; Berezovsky, Igor N.
Peer reviewed; Journal article
Conformational changes in allosteric regulation can to a large extent be described as motion along one or a few coherent
degrees of freedom. The states involved are inherent to the protein, in the sense that they are visited by the protein also in
the absence of effector ligands. Previously, we developed the measure binding leverage to find sites where ligand binding
can shift the conformational equilibrium of a protein. Binding leverage is calculated for a set of motion vectors representing
independent conformational degrees of freedom. In this paper, to analyze allosteric communication between binding sites,
we introduce the concept of leverage coupling, based on the assumption that only pairs of sites that couple to the same
conformational degrees of freedom can be allosterically connected. We demonstrate how leverage coupling can be used to
analyze allosteric communication in a range of enzymes (regulated by both ligand binding and post-translational
modifications) and huge molecular machines such as chaperones. Leverage coupling can be calculated for any protein
structure to analyze both biological and latent catalytic and regulatory sites.
What are the molecular mechanisms of allosteric communication in proteins? We base our analysis on the hypothesis that a folded protein has a number of conformational degrees of freedom, which describe fluctuations around the native conformation and switching from/to functional states. Transitions between the protein states involved in function and its regulation are based on coherent conformational degrees of freedom. Motion of one part of a protein along such a degree of freedom implies a correlated motion in other parts of the protein. By determining which binding sites are simultaneously affected by the same motion, we find sites that are allosterically coupled, i.e., where binding at one site can cause a change in ligand affinity at another. Leverage coupling, the quantity introduced to measure this type of connection, reflects allosteric communication between different binding sites. We show how it can be used to understand allostery in enzymes of different sizes as well as in large protein complexes such as chaperones. Analysis of leverage coupling provides guidance in targeting native and latent regulatory sites.
Thu, 08 Dec 2011 00:00:00 GMThttp://hdl.handle.net/1956/56162011-12-08T00:00:00ZStructure of Polynomial-Time Approximation
http://hdl.handle.net/1956/5602
Structure of Polynomial-Time Approximation
Leeuwen, Erik Jan van; Leeuwen, Jan van
Peer reviewed; Journal article
Approximation schemes are commonly classified as being either a polynomial-time approximation scheme (ptas) or a fully polynomial-time approximation scheme (fptas). To properly differentiate between approximation schemes for concrete problems, several subclasses have been identified: (optimum-)asymptotic schemes (ptas∞, fptas∞), efficient schemes (eptas), and size-asymptotic schemes. We explore the structure of these subclasses, their mutual relationships, and their connection to the classic approximation classes. We prove that several of the classes are in fact equivalent. Furthermore, we prove the equivalence of eptas to so-called convergent polynomial-time approximation schemes. The results are used to refine the hierarchy of polynomial-time approximation schemes considerably and demonstrate the central position of eptas among approximation schemes.
We also present two ways to bridge the hardness gap between asymptotic approximation schemes and classic approximation schemes. First, using notions from fixed-parameter complexity theory, we provide new characterizations of when problems have a ptas or fptas. Simultaneously, we prove that a large class of problems (including all MAX-SNP-complete problems) cannot have an optimum-asymptotic approximation scheme unless P=NP, thus strengthening results of Arora et al. (J. ACM 45(3):501–555, 1998). Secondly, we distinguish a new property exhibited by many optimization problems: pumpability. With this notion, we considerably generalize several problem-specific approaches to improve the effectiveness of approximation schemes with asymptotic behavior.
Fri, 14 Oct 2011 00:00:00 GMThttp://hdl.handle.net/1956/56022011-10-14T00:00:00ZDatainnsamling med XForms i Dynamic Presentation Generator
http://hdl.handle.net/1956/5580
Datainnsamling med XForms i Dynamic Presentation Generator
Høiland, Morten
Master thesis
Wed, 01 Jun 2011 00:00:00 GMThttp://hdl.handle.net/1956/55802011-06-01T00:00:00ZInteractive Visual Analysis of Process Data
http://hdl.handle.net/1956/5302
Interactive Visual Analysis of Process Data
Lampe, Ove Daae
Doctoral thesis
Data gathered from processes, or process data, contains many different aspects that a visualization system should convey: temporal coherence, spatial connectivity, streaming data, and the need for in-situ visualizations, each of which comes with its own challenges. Additionally, as sensors get more affordable and the benefits of measurements get clearer, we are faced with a deluge of data whose size is rapidly growing. With all the aspects that should be supported and the vast increase in the amount of data, the traditional technique of dashboards showing the most recent data becomes insufficient for practical use. In this thesis we investigate how to extend traditional process visualization techniques by bringing streaming process data into an interactive visual analysis setting. Augmenting process visualization with interactivity enables users to go beyond mere observation, to pose questions about observed phenomena, and to delve into the data to mine for the answers. Furthermore, this thesis investigates how to utilize frequency-based, as opposed to item-based, techniques to show such large amounts of data. By utilizing Kernel Density Estimates (KDE), we show how the display of streaming data benefits from non-parametric automatic aggregation, which lets incoming data be interpreted in the context of historic data.
Wed, 30 Nov 2011 00:00:00 GMThttp://hdl.handle.net/1956/53022011-11-30T00:00:00ZCurve-Centric Volume Reformation for Comparative Visualization
http://hdl.handle.net/1956/5299
Curve-Centric Volume Reformation for Comparative Visualization
Lampe, Ove Daae; Correa, Carlos; Ma, Kwan-Liu; Hauser, Helwig
Peer reviewed; Journal article
We present two visualization techniques for curve-centric volume reformation with the aim to create compelling comparative visualizations. A curve-centric volume reformation deforms a volume, with regard to a curve in space, to create a new space in which the curve evaluates to zero in two dimensions and spans its arc-length in the third. The volume surrounding the curve is deformed such that spatial neighborhood to the curve is preserved. The curve-centric reformation produces images where one axis is aligned to arc-length, and thus allows researchers and practitioners to apply their arc-length parameterized data visualizations in parallel for comparison. Furthermore, we show that when visualizing dense data, our technique provides an inside-out projection, from the curve and out into the volume, which allows for inspection of what is around the curve. Finally, we demonstrate the usefulness of our techniques in the context of two application cases. We show that existing data visualizations of arc-length parameterized data can be enhanced by using our techniques, in addition to creating a new view and perspective on volumetric data around curves. Additionally, we show how volumetric data can be brought into plotting environments that allow precise readouts. In the first case we inspect streamlines in a flow field around a car, and in the second we inspect seismic volumes and well logs from drilling.
Sun, 11 Oct 2009 00:00:00 GMThttp://hdl.handle.net/1956/52992009-10-11T00:00:00ZCurve Density Estimates
http://hdl.handle.net/1956/5297
Curve Density Estimates
Lampe, Ove Daae; Hauser, Helwig
Peer reviewed; Journal article
In this work, we present a technique based on kernel density estimation for rendering smooth curves. With this approach, we produce uncluttered and expressive pictures, revealing frequency information about one or multiple curves, independent of the level of detail in the data, the zoom level, and the screen resolution. With this technique, the visual representation scales seamlessly from an exact line drawing (for low-frequency/low-complexity curves) to a probability density estimate for more intricate situations. This scale-independence facilitates displays based on non-linear time, enabling high-resolution accuracy of recent values, accompanied by long historical series for context. We demonstrate the functionality of this approach in the context of prediction scenarios and in the context of streaming data.
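As a minimal illustration of the kernel density estimation these techniques build on, the following is a plain Gaussian KDE sketch; it does not reproduce the paper's curve rendering, line kernel, or viewport-adaptive bandwidth:

```python
import numpy as np

def kde_1d(samples, grid, bandwidth):
    """Gaussian kernel density estimate of `samples`, evaluated on `grid`."""
    samples = np.asarray(samples, dtype=float)
    grid = np.asarray(grid, dtype=float)
    # Scaled distances between every grid point and every sample.
    u = (grid[:, None] - samples[None, :]) / bandwidth
    # Standard normal kernel, averaged over samples and rescaled by bandwidth.
    return np.exp(-0.5 * u**2).mean(axis=1) / (bandwidth * np.sqrt(2 * np.pi))

# A cluster near 0.1 plus an outlier at 2.0; the estimate peaks at the cluster.
grid = np.linspace(-1, 3, 41)
density = kde_1d([0.0, 0.1, 0.2, 2.0], grid, bandwidth=0.3)
```

Smaller bandwidths approach the exact-line-drawing end of the scale described above; larger ones give the smooth density end.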
Presented in Proceedings of Eurographics/IEEE-VGTC Symp. on Visualization (EuroVis 2011), 1-3 June 2011, Bergen, Norway
Tue, 28 Jun 2011 00:00:00 GMThttp://hdl.handle.net/1956/52972011-06-28T00:00:00ZInteractive Visualization of Streaming Data with Kernel Density Estimation
http://hdl.handle.net/1956/5296
Interactive Visualization of Streaming Data with Kernel Density Estimation
Lampe, Ove Daae; Hauser, Helwig
Conference object
In this paper, we discuss the extension and integration of the statistical concept of Kernel Density Estimation (KDE) in a scatterplot-like visualization for dynamic data at interactive rates. We present a line kernel for representing streaming data, and we discuss how the concept of KDE can be adapted to enable a continuous representation of the distribution of a dependent variable of a 2D domain. We propose to automatically adapt the kernel bandwidth of KDE to the viewport settings, in an interactive visualization environment that allows zooming and panning. We also present a GPU-based realization of KDE that leads to interactive frame rates, even for comparably large datasets. Finally, we demonstrate the usefulness of our approach in the context of three application scenarios: one studying streaming ship traffic data, another one from the oil & gas domain, where process data from the operation of an oil rig is streaming in to an on-shore operational center, and a third one studying commercial air traffic in the US spanning 1987 to 2008.
Pacific Visualization Symposium, 1-4 March 2011, Hong Kong, China
Sat, 01 Jan 2011 00:00:00 GMThttp://hdl.handle.net/1956/52962011-01-01T00:00:00ZDiagram Predicate Framework meets Model Versioning and Deep Metamodelling
http://hdl.handle.net/1956/5261
Diagram Predicate Framework meets Model Versioning and Deep Metamodelling
Rossini, Alessandro
Doctoral thesis
Model-driven engineering (MDE) is a branch of software engineering which aims at improving the productivity, quality and cost-effectiveness of software by shifting the paradigm from code-centric to model-centric. MDE promotes models and modelling languages as the main artefacts of the development process and model transformation as the primary technique to generate (parts of) software systems out of models. Models enable developers to reason at a higher level of abstraction, while model transformation restrains developers from repetitive and error-prone tasks such as coding. Although techniques and tools for MDE have advanced considerably during the last decade, several concepts and standards in MDE are still defined semi-formally, which may not guarantee the degree of precision required by MDE.
This thesis provides a formalisation of concepts in MDE based on the Diagram Predicate Framework (DPF), which was already under development before this work was initiated. DPF is a formal diagrammatic specification framework founded on category theory and graph transformation. In particular, the main contribution of this thesis is the consolidation of DPF and the formalisation of two novel techniques in MDE, namely model versioning and deep metamodelling. The content of this thesis is based on a sequence of publications resulting from the joint work with researchers from the University of Bergen, the Bergen University College and the Autonomous University of Madrid.
The work presented in this thesis is neither purely theoretical nor purely practical; it rather seeks to bridge the gap between these worlds. It provides a formal approach to model versioning and deep metamodelling motivated and illustrated by practical examples, while it introduces only the theoretical constructions which are necessary to investigate, formalise and solve these practical challenges.
Wed, 07 Dec 2011 00:00:00 GMThttp://hdl.handle.net/1956/52612011-12-07T00:00:00ZParallel Graph Algorithms for Combinatorial Scientific Computing
http://hdl.handle.net/1956/5118
Parallel Graph Algorithms for Combinatorial Scientific Computing
Patwary, Mostofa Ali
Doctoral thesis
Fri, 26 Aug 2011 00:00:00 GMThttp://hdl.handle.net/1956/51182011-08-26T00:00:00ZCryptanalysis of Cryptographic Primitives and Related Topics
http://hdl.handle.net/1956/5106
Cryptanalysis of Cryptographic Primitives and Related Topics
Hassanzadeh, Seyed Mehdi Mohammad
Doctoral thesis
This thesis focuses on the cryptanalysis of cryptographic primitives, especially stream ciphers, which is an important topic in cryptography. Additionally, the security of network coding is discussed and improved with a new scheme.
First, a new statistical test, called the Quadratic Box-Test, is presented. It can be used to evaluate the randomness quality of pseudorandom sequences, such as the output of a cryptographic primitive. Moreover, it can be used as a distinguisher to attack stream ciphers, block ciphers and hash functions.
In the second part of the thesis, some stream ciphers are analyzed and some successful attacks are presented. A modified algebraic attack is used against some clock-controlled stream ciphers. In order to mount successful attacks, the modified algebraic attack is accompanied by some new ideas. Moreover, the security of clock-controlled stream ciphers based on their jumping systems is investigated and discussed, resulting in some recommendations for designing a clock-controlled stream cipher. Finally, a differential distinguishing attack based on a fault attack is presented to attack the Shannon stream cipher.
The last part of this thesis focuses on the security of network coding, which promises increased efficiency for future networks. For secure network coding, a new attack model is studied and the secrecy capacity is improved by a concatenated secret sharing scheme.
Fri, 09 Sep 2011 00:00:00 GMThttp://hdl.handle.net/1956/51062011-09-09T00:00:00ZInteractive visual analysis of multi-faceted scientific data
http://hdl.handle.net/1956/4820
Interactive visual analysis of multi-faceted scientific data
Kehrer, Johannes
Doctoral thesis
Visualization plays an important role in exploring, analyzing and presenting the large and heterogeneous scientific data that arise in medicine, research, engineering, and other disciplines. Model and data scenarios are becoming increasingly multi-faceted: data are often multi-variate and
time-dependent, they stem from different data sources (multi-modal data), from
multiple simulation runs (multi-run data), or from multi-physics simulations of
interacting phenomena that consist of coupled simulation models (multi-model
data). The different data characteristics result in special challenges for visualization
research and interactive visual analysis. The data are usually large and
come on various types of grids with different resolution that need to be fused in
the visual analysis.
This thesis deals with different aspects of the interactive visual analysis of
multi-faceted scientific data. The main contributions of this thesis are: 1) a
number of novel approaches and strategies for the interactive visual analysis of
multi-run data; 2) a concept that enables the feature-based visual analysis across
an interface between interrelated parts of heterogeneous scientific data (including
data from multi-run and multi-physics simulations); 3) a model for visual analysis
that is based on the computation of traditional and robust estimates of statistical
moments from higher-dimensional multi-run data; 4) procedures for visual
exploration of time-dependent climate data that support the rapid generation
of promising hypotheses, which are subsequently evaluated with statistics; and
5) structured design guidelines for glyph-based 3D visualization of multi-variate
data together with a novel glyph. All these approaches are incorporated in a single
framework for interactive visual analysis that uses powerful concepts such as
coordinated multiple views, feature specification via brushing, and focus+context visualization. The data derivation mechanism of the framework, in particular, has proven very useful for analyzing different aspects of the data at different stages of the visual analysis. The proposed concepts and methods are demonstrated
in a number of case studies that are based on multi-run climate data and
data from a multi-physics simulation.
Fri, 27 May 2011 00:00:00 GMThttp://hdl.handle.net/1956/48202011-05-27T00:00:00ZSolving System of Nonlinear Equations Using Methods in the Halley Class
http://hdl.handle.net/1956/4815
Solving System of Nonlinear Equations Using Methods in the Halley Class
Suleiman, Sara Tagelsir Mohamed
Master thesis
In this thesis a new iterative framework to solve the nonlinear system of equations $F(x)=0$ in n-dimensional real space is established. This iterative framework is based on a quadratic model of the function $F(x)$ at the current point. The convergence analysis shows that the framework has a Q-third (cubic) rate of convergence. The main advantage of this framework is that the system of nonlinear equations is simplified to a quadratic system of equations, which hopefully has lower computational complexity than the original system. It is shown that the Halley class inherits its convergence properties from the quadratic model. In practice, for large-scale problems, the inexact Halley class methods are used to solve $F(x)=0$, and they also have a Q-third rate of convergence.
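The cubic convergence referred to above can be seen in the classical scalar member of the Halley class; this is a sketch of the one-dimensional textbook iteration only, not the thesis's quadratic-model framework for systems:

```python
def halley(f, df, d2f, x0, tol=1e-12, max_iter=50):
    """Classical scalar Halley iteration (a member of the Halley class)."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        d1, d2 = df(x), d2f(x)
        # Halley update; converges with Q-third (cubic) order near a simple root.
        x = x - 2 * fx * d1 / (2 * d1 * d1 - fx * d2)
    return x

# Root of f(x) = x^2 - 2, i.e. sqrt(2); from x0 = 1 only a few steps are needed.
root = halley(lambda x: x * x - 2, lambda x: 2 * x, lambda x: 2.0, x0=1.0)
```

Note the update uses both the first and second derivative, which is exactly the quadratic-model information the thesis exploits for systems.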
Tue, 26 May 2009 00:00:00 GMThttp://hdl.handle.net/1956/48152009-05-26T00:00:00ZFormulas as programs
http://hdl.handle.net/1956/4772
Formulas as programs
Burrows, Eva
Master thesis
Alma-0 is a programming language supporting declarative programming,
which combines the advantages of imperative and logic programming
paradigms. This work explores declarative programming by extending the
interpretation of Alma-0 programs to recursive and/or non-recursive
procedures which exclude destructive assignments, and gives formal
proofs for the implication: Implementation --> Specification for some
non-trivial examples, e.g., finding a maximum element and various
sorting algorithms.
Author name on thesis: Eva Suci
Wed, 01 Jan 2003 00:00:00 GMThttp://hdl.handle.net/1956/47722003-01-01T00:00:00ZProgramming with Explicit Dependencies. A Framework for Portable Parallel Programming
http://hdl.handle.net/1956/4771
Programming with Explicit Dependencies. A Framework for Portable Parallel Programming
Burrows, Eva
Doctoral thesis
Computational devices are rapidly evolving into massively parallel
systems. Multicore processors are already standard; high performance
processors such as the Cell/BE processor, graphics processing units
(GPUs) featuring hundreds of on-chip processors, and reconfigurable
devices such as FPGAs are all developed to deliver high computing power.
They make parallelism commonplace, not only the privilege of expensive
high-end platforms. However, classical parallel programming paradigms
cannot readily exploit these highly parallel systems. In addition, each
hardware architecture comes along with a new programming model and/or
application programming interface (API). This makes the writing of
portable, efficient parallel code difficult. As the number of processors
per chip is expected to double every other year or so, bringing parallel
processing into the mass market, software needs to be parallelized and
ported in an efficient way to massively parallel, possibly
heterogeneous, architectures. This work presents the foundations of a
high-level hardware independent parallel programming model based on
algebraic software methodologies. The model addresses two main issues of
parallel computing: how to efficiently map computations to different parallel hardware architectures at a high and easy-to-manipulate level, and how to do this at a low development cost, i.e., without rewriting the problem-solving code. The uniqueness of this framework is two-fold:
1) it presents the user with a programmable interface especially
designed to allow the user to express the data dependency information of
the computation as real code, in terms of a data dependency algebra
(DDA). Hence data dependencies are made explicit in the program code.
The DDA interface consists of a generic point type, a generic branch
index type, and two sets of generic function declarations on these
types, requests and supplies, which are duals of each other. 2) it gives
direct access, within a unified framework, to various hardware
architectures’ communication layouts, or their APIs, at a high-level.
This allows the embedding of the computation to be fully controlled by
the programmer at a high and easy to manipulate level. In turn, this
saves the user from the hassle of learning “the dialect” of each
targeted hardware architecture. Direct access to aspects of the
hardware model is needed by some architectures, e.g., GPUs, FPGAs.
However, the model is fully portable and not tied to any specific
processor or hardware architecture, due to the modularisation of the
data dependencies. The inherent properties of DDAs lead to various
execution models, depending on the chosen hardware architecture, and
provide full control over the execution models’ computation time. Since
spatial placements of computations are controlled from DDAs, this gives
full control over space usage as well, whether sequential or parallel
execution is desired. In the parallel cases, DDAs give full control over
processor and memory allocation, and communication channel usage, while
still at the abstraction level of the source program.
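A hypothetical, minimal rendering of the DDA interface described above (a generic point type, a branch index, and dual request/supply functions) might look as follows; all names and the example dependency are illustrative assumptions, not the thesis's actual API:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Point:
    i: int  # position in a one-dimensional iteration space (illustrative)

def request(p: Point, branch: int) -> Point:
    """Which point p's computation depends on along the given branch."""
    return Point(p.i - 1)  # example DDA: each point depends on its predecessor

def supply(p: Point, branch: int) -> Point:
    """Dual of request: which point consumes the value computed at p."""
    return Point(p.i + 1)

# Duality: supplying along the branch just requested returns the original point.
assert supply(request(Point(5), 0), 0) == Point(5)
```

Because the dependency information is ordinary code like this, a scheduler can interrogate it to place computations on any target architecture, which is the portability argument made above.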
Mon, 23 May 2011 00:00:00 GMThttp://hdl.handle.net/1956/47712011-05-23T00:00:00ZCCZ-equivalence of bent vectorial functions and related constructions
http://hdl.handle.net/1956/4557
CCZ-equivalence of bent vectorial functions and related constructions
Budaghyan, Lilya; Carlet, Claude
Peer reviewed; Journal article
We observe that the CCZ-equivalence of bent vectorial functions over F_2^n (n even) reduces to their EA-equivalence. Then we show that in spite of this fact, CCZ-equivalence can be used for constructing bent functions which are new up to EA-equivalence and therefore to CCZ-equivalence: applying CCZ-equivalence to a non-bent vectorial function F which has some bent components, we get a function F′ which also has some bent components and whose bent components are CCZ-inequivalent to the components of the original function F. Using this approach we construct classes of nonquadratic bent Boolean and bent vectorial functions.
Thu, 06 Jan 2011 00:00:00 GMThttp://hdl.handle.net/1956/45572011-01-06T00:00:00ZA Note on Exact Algorithms for Vertex Ordering Problems on Graphs
http://hdl.handle.net/1956/4556
A Note on Exact Algorithms for Vertex Ordering Problems on Graphs
Bodlaender, Hans L.; Fomin, Fedor V.; Koster, Arie M.C.A.; Kratsch, Dieter; Thilikos, Dimitrios M.
Peer reviewed; Journal article
In this note, we give a proof that several vertex ordering problems can be solved in O*(2^n) time and O*(2^n) space, or in O*(4^n) time and polynomial space. The algorithms generalize algorithms for the Travelling Salesman Problem by Held and Karp (J. Soc. Ind. Appl. Math. 10:196–210, 1962) and Gurevich and Shelah (SIAM J. Comput. 16:486–502, 1987). We survey a number of vertex ordering problems to which the results apply.
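The Held-Karp dynamic program that these results generalize can be sketched as follows; this is the textbook TSP version over vertex subsets, assuming a symmetric distance matrix, not the paper's generalized vertex-ordering algorithm:

```python
from itertools import combinations

def held_karp_tsp(dist):
    """Held-Karp dynamic program: O*(2^n) time and space for the TSP."""
    n = len(dist)
    # best[(S, j)] = cheapest path from 0 visiting exactly the set S, ending at j.
    best = {(frozenset([0, j]), j): dist[0][j] for j in range(1, n)}
    for size in range(3, n + 1):
        for subset in combinations(range(1, n), size - 1):
            S = frozenset(subset) | {0}
            for j in subset:
                best[(S, j)] = min(
                    best[(S - {j}, k)] + dist[k][j] for k in subset if k != j
                )
    full = frozenset(range(n))
    # Close the tour by returning to vertex 0.
    return min(best[(full, j)] + dist[j][0] for j in range(1, n))

# Square with side 1 and "diagonal" 2: the optimal tour 0-1-2-3-0 costs 4.
cost = held_karp_tsp([[0, 1, 2, 1], [1, 0, 1, 2], [2, 1, 0, 1], [1, 2, 1, 0]])
```

The vertex ordering problems in the paper replace the path-extension cost `dist[k][j]` with a problem-specific cost of appending a vertex to a partial ordering, while the subset recursion stays the same.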
Fri, 21 Jan 2011 00:00:00 GMThttp://hdl.handle.net/1956/45562011-01-21T00:00:00ZDirected graph representation of half-rate additive codes over GF(4)
http://hdl.handle.net/1956/4550
Directed graph representation of half-rate additive codes over GF(4)
Danielsen, Lars Eirik; Parker, Matthew G.
Peer reviewed; Journal article
We show that (n, 2n) additive codes over GF(4) can be represented as directed
graphs. This generalizes earlier results on self-dual additive codes over GF(4), which correspond
to undirected graphs. Graph representation reduces the complexity of code classification,
and enables us to classify additive (n, 2n) codes over GF(4) of length up to 7. From
this we also derive classifications of isodual and formally self-dual codes. We introduce new
constructions of circulant and bordered circulant directed graph codes, and show that these
codes will always be isodual. A computer search of all such codes of length up to 26 reveals
that these constructions produce many codes of high minimum distance. In particular, we
find new near-extremal formally self-dual codes of length 11 and 13, and isodual codes of
length 24, 25, and 26 with better minimum distance than the best known self-dual codes.
Fri, 01 Jan 2010 00:00:00 GMThttp://hdl.handle.net/1956/45502010-01-01T00:00:00ZSparse Boolean equations and circuit lattices
http://hdl.handle.net/1956/4531
Sparse Boolean equations and circuit lattices
Semaev, Igor
Peer reviewed; Journal article
A system of Boolean equations is called sparse if each equation depends on a small number of variables. Finding solutions to such systems efficiently is an underlying hard problem in the cryptanalysis of modern ciphers. In this paper we study new properties of the Agreeing Algorithm, which was earlier designed to solve such equations. Then we show that the mathematical description of the Algorithm translates straight into the language of electric wires and switches. Applications to the DES and the Triple DES are discussed. The new approach, at least theoretically, allows faster key rejection in a brute-force attack than with COPACOBANA.
Fri, 01 Jan 2010 00:00:00 GMThttp://hdl.handle.net/1956/45312010-01-01T00:00:00ZLower bounds on the size of spheres of permutations under the Chebychev distance
http://hdl.handle.net/1956/4493
Lower bounds on the size of spheres of permutations under the Chebychev distance
Kløve, Torleiv
Peer reviewed; Journal article
Lower bounds on the number of permutations p of {1, 2, . . . , n} satisfying |p(i) − i| ≤ d for all i are given.
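For small n, such permutations can be counted by brute force, which is handy for checking lower bounds of this kind; this is an illustrative sketch, not the paper's method:

```python
from itertools import permutations

def count_d_permutations(n, d):
    """Count permutations p of {1, ..., n} with |p(i) - i| <= d for every i."""
    return sum(
        all(abs(p[i] - (i + 1)) <= d for i in range(n))
        for p in permutations(range(1, n + 1))
    )

# For d = 1 only fixed points and adjacent swaps are allowed, so the counts
# follow the Fibonacci numbers: 1, 2, 3, 5, 8, 13, ...
counts = [count_d_permutations(n, 1) for n in range(1, 7)]
```

Any lower bound of the paper's form must sit below these exact sphere sizes, which is what makes the brute-force values a useful sanity check.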
Fri, 01 Jan 2010 00:00:00 GMThttp://hdl.handle.net/1956/44932010-01-01T00:00:00ZDiagram predicate framework: A formal approach to MDE
http://hdl.handle.net/1956/4469
Diagram predicate framework: A formal approach to MDE
Rutle, Adrian
Doctoral thesis
Model-driven engineering (MDE) is a software engineering discipline which
promotes models as first-class entities. It represents a shift of paradigm in software
development, from being code-centric to becoming model-centric. MDE
is an attempt to organise modelling, metamodelling and model transformation
in a well-structured engineering methodology. This thesis is about the formalisation
of MDE-concepts in a diagrammatic specification formalism which we
call Diagram Predicate Framework (DPF). DPF provides a formal diagrammatic
approach to (meta)modelling and model transformation based on category
theory. It is a generic graph-based specification framework that aims to adapt first-order logic and categorical logic to software engineering needs.
This thesis is based on a sequence of publications and it is the intended
purpose of this thesis to consolidate the present state of development regarding
DPF. Some of the foundation for DPF was already under construction before
this work was initiated. The main contributions of this thesis are:
• An introduction to formal diagrammatic modelling and diagrammatic
constraints
• A comparison of some of the state-of-the-art modelling languages, techniques
and frameworks to DPF
• A neat, diagrammatic formalisation of the metamodelling hierarchy as
proposed by the Object Management Group
• A formal approach to model transformations and to constraint-awareness
in model transformation
• A formalisation of the fundamental concepts and processes of version
control in the context of MDE
This thesis is organised as follows. The first chapter is dedicated to introducing MDE, its technological basis and challenges, and to motivating the formalisation approach presented in this thesis. This introduction is meant as a
guide for newcomers to MDE, especially for theoreticians. The main part of
the thesis details DPF and its formal background. This part is more theoretically
oriented, and is meant to elucidate the formal foundation of the DPF
framework for software engineers. More precisely, DPF is presented as a formal
approach to (meta)modelling, model transformation and version control.
The last chapter is dedicated to a discussion of related work, further work and
concluding remarks.
The content of this thesis is neither purely theoretical nor purely practical;
rather it seeks to bridge the gap between these worlds. It provides a formal approach
to diagrammatic modelling, model transformation and version control
motivated and illustrated by practical examples. We introduce only the theoretical
elements which are necessary to investigate, formalise, and solve
the practical problems. More precisely, we explicitly define the formal concepts
and constructions needed in order to understand the thesis, such as graph,
graph homomorphism, categories, pullback and pushout.
Mon, 29 Nov 2010 00:00:00 GMThttp://hdl.handle.net/1956/44692010-11-29T00:00:00ZStøtte for rike klienter i Dynamic Presentation Generator
http://hdl.handle.net/1956/4465
Støtte for rike klienter i Dynamic Presentation Generator
Skeidsvoll, Peder Lång
Master thesis
This thesis describes how support for rich clients can be added to the Dynamic Presentation Generator (DPG). It covers both the server and client sides of the topic. DPG is a content management system developed by the JAFU project at the Department of Informatics at the University of Bergen (UiB).
Tue, 01 Jun 2010 00:00:00 GMThttp://hdl.handle.net/1956/44652010-06-01T00:00:00ZBuilding Trust in Remote Internet Voting
http://hdl.handle.net/1956/4463
Building Trust in Remote Internet Voting
Nestås, Lars Hopland
Master thesis
During the past decades, a lot of research has been done to create voting protocols and election systems that facilitate voting via the Internet. Many universities and private organizations are now using such systems for referendums, or to elect individuals for its leading positions. Several countries are also moving towards electronic voting over the Internet for legally binding elections.
Cryptographers and system designers have until now mainly focused on the mathematical and technical aspects of electronic voting systems. How to build trust in these systems has not been considered in any depth (as far as the author has been able to establish). Whenever there is a change in election procedures, voters', candidates', and election officials' trust in the election system may be challenged. In this thesis, we focus on different aspects of how to build and preserve trust in remote electronic voting systems.
The intended audience for this thesis is first and foremost students and IT professionals interested in remote electronic voting systems. However, the first two chapters, and several parts of Chapter 3 and Chapter 4, are not very technically oriented. Readers with some basic IT knowledge who are interested in the topic are encouraged to continue reading.
Thu, 27 May 2010 00:00:00 GMThttp://hdl.handle.net/1956/44632010-05-27T00:00:00ZInteraksjon og Søk i Dynamic Presentation Generator
http://hdl.handle.net/1956/4462
Interaksjon og Søk i Dynamic Presentation Generator
Olsen, Tobias Rusås
Master thesis
This thesis is about how interaction and search can be integrated into the Dynamic Presentation Generator. This content management system uses presentation patterns, an idea based on separating structure and content. Until now, the system has not supported user input (interaction) or searching for content. The thesis describes how this has been made possible in the new version of the Dynamic Presentation Generator.
Tue, 01 Jun 2010 00:00:00 GMThttp://hdl.handle.net/1956/44622010-06-01T00:00:00ZOn iterative decoding of high-density parity-check codes using edge-local complementation
http://hdl.handle.net/1956/4446
On iterative decoding of high-density parity-check codes using edge-local complementation
Knudsen, Joakim Grahl
Doctoral thesis
The overall topic of this work is a graph operation known as edge-local complementation (ELC) and its applications to iterative decoding of classical codes. Although these legacy codes are arguably not well-suited for graph-based decoding, they have other desirable properties, and the general problem of forging this alloy is the subject of much current research. From this perspective, these codes are typically referred to as high-density parity-check codes. Our approach is to gain diversity by means of ELC. Based on the known link between ELC and the information sets of a code, C, we identify a one-to-one relationship between ELC operations and the automorphism group of the code, Aut(C). With respect to a specific parity-check matrix, H, we classify these code-preserving permutations into trivial and nontrivial permutations, based on whether or not the matrix is preserved (under ELC) up to row permutations. The corresponding iso-ELC operations preserve the structure of the graph, and simulation data are presented on the performance benefit of using iso-ELC operations as a source of diversity. Generalizing this to random (non-iso) ELC, we explore the benefits of a simplified, entirely graph-local (i.e., distributive) implementation of ELC-based decoding. Special codes are chosen which are structurally well-suited for this type of random ELC decoding. At an extreme, certain codes are ELC-preserved, in that any ELC operation is an iso-ELC operation. Although less useful from a coding perspective, the corresponding graphs are interesting to determine and classify. In the general case, however, we observe negative effects of random ELC, which causes the weight of the graph to increase. Based on this, we explore the structural properties that determine which ELC operations (i.e., which edges of a graph) are desirable from a decoding perspective. This is dominated by the weight increase (in terms of the number of new edges) resulting from ELC, which also creates detrimental short cycles, so we identify the graphical conditions (i.e., subgraphs) under which ELC is weight-bounding (WB-ELC). These operations are used to reduce the weight of a systematic H (given a code), and are also used in a proposed decoder based on WB-ELC. In a slightly different approach, we also apply ELC operations in an adaptive decoding scheme. As ELC is a local operation, we can, to a certain extent, specifically target the (inferred) least reliable positions. This is a well-known technique for improving decoding, normally implemented via Gaussian elimination, which we improve by exploiting beneficial properties of the ELC operation.
Wed, 24 Nov 2010 00:00:00 GMThttp://hdl.handle.net/1956/44462010-11-24T00:00:00ZComparing 17 graph parameters
http://hdl.handle.net/1956/4329
Comparing 17 graph parameters
Sasák, Róbert
Master thesis
Many parameterized problems have been shown to be FPT or W-hard. However, there are still thousands of problems and parameters for which we do not yet know whether they are FPT or W-hard. In this thesis, we provide a tool for extending existing results to additional parameterized problems.
We use the comparison relation for comparing graph parameters, i.e., we say that parameter p1 is bounded by parameter p2 if there exists a function f such that p1(G) <= f(p2(G)) holds for every graph G. This allows us to extend results for a parameterized problem P in two ways: if P parameterized by p1 is FPT, then P parameterized by p2 is also FPT; conversely, if P parameterized by p2 is W-hard, then P parameterized by p1 is also W-hard.
Moreover, we determine, for each pair among the following 17 graph parameters, whether one is bounded by the other: path-width, tree-width, branch-width, clique-width, rank-width, boolean-width, maximum independent set, minimum dominating set, vertex cover number, maximum clique, chromatic number, maximum matching, maximum induced matching, cut-width, carving-width, degeneracy and tree-depth. To avoid carrying out all 272 individual comparisons, we introduce a methodology for examining graph parameters that rapidly reduces the number of comparisons. Finally, we provide a comparison diagram for all 17 parameters.
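The boundedness relation above can be illustrated with two of the listed parameters: vertex cover number and maximum matching. For any graph G, both endpoints of a maximum matching form a vertex cover, so vc(G) <= 2*mm(G), and every edge of a matching needs its own cover vertex, so mm(G) <= vc(G); each parameter therefore bounds the other, with f(x) = 2x and f(x) = x respectively. A minimal brute-force sketch of this check (illustrative helper names, exponential time, small graphs only; not the thesis's tool):

```python
from itertools import combinations

def is_vertex_cover(edges, cover):
    # every edge must have at least one endpoint in the cover
    return all(u in cover or v in cover for u, v in edges)

def vertex_cover_number(vertices, edges):
    # smallest k for which some k-subset of vertices covers all edges
    for k in range(len(vertices) + 1):
        for subset in combinations(vertices, k):
            if is_vertex_cover(edges, set(subset)):
                return k

def maximum_matching(edges):
    # largest set of pairwise vertex-disjoint edges, by brute force
    for k in range(len(edges), 0, -1):
        for subset in combinations(edges, k):
            used = [v for e in subset for v in e]
            if len(used) == len(set(used)):
                return k
    return 0

# C5, the 5-cycle: mm = 2 and vc = 3
vertices = [0, 1, 2, 3, 4]
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
vc = vertex_cover_number(vertices, edges)
mm = maximum_matching(edges)
assert mm <= vc <= 2 * mm  # p1(G) <= f(p2(G)) with f(x) = 2x
```

By the transfer argument in the abstract, a W-hardness result parameterized by vertex cover number then carries over to the parameter maximum matching, and an FPT result transfers in the opposite direction.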
Mon, 02 Aug 2010 00:00:00 GMThttp://hdl.handle.net/1956/43292010-08-02T00:00:00ZFeasible Algorithms for Semantics — Employing Automata and Inference Systems
http://hdl.handle.net/1956/4325
Feasible Algorithms for Semantics — Employing Automata and Inference Systems
Hovland, Dag
Doctoral thesis
Thu, 16 Dec 2010 00:00:00 GMThttp://hdl.handle.net/1956/43252010-12-16T00:00:00ZRisks in Networked Computer Systems
http://hdl.handle.net/1956/4200
Risks in Networked Computer Systems
Klingsheim, André N.
Doctoral thesis
Networked computer systems yield great value to businesses and governments, but also
create risks. The eight papers in this thesis highlight vulnerabilities in computer systems
that lead to security and privacy risks. A broad range of systems is discussed in this thesis:
Norwegian online banking systems, the Norwegian Automated Teller Machine (ATM)
system during the 1990s, mobile phones, web applications, and wireless networks. One
paper also comments on legal risks to bank customers.
Wed, 24 Sep 2008 00:00:00 GMThttp://hdl.handle.net/1956/42002008-09-24T00:00:00ZPrediction and analysis of protein structure
http://hdl.handle.net/1956/4080
Prediction and analysis of protein structure
Hollup, Siv Midtun
Doctoral thesis
This thesis, which contains an introduction and four manuscripts, summarises
my efforts during the past four years to understand proteins, their structure
and dynamics. The first manuscript presents a protocol that refines models
as part of a protein structure prediction pipeline. To achieve this, we used
spatial information from determined structures and sequence information from
multiple alignments. The protocol was used to improve the quality of rough
models containing only one point per residue.
In the second manuscript we investigated protein fold space. We compared
models with known fold to determined structures and found that our models
contained many folds that were not seen in the present pool of structures in
the PDB. Comparison of structural features revealed no reason why the model
folds could not exist.
We investigated how well geometric comparison methods distinguished fold
in the third manuscript. We presented a novel measure of topological similarity
and showed that geometric methods have trouble distinguishing fold differences
between both models and PDB structures.
In the last manuscript we showed that the architecture is the most important
factor for dynamics as measured by normal modes. Protein fold has some
effect and cannot be discarded completely, but larger differences in fold do
not necessarily correspond to larger differences in flexibility if the architecture
is the same.
Mon, 19 Apr 2010 00:00:00 GMThttp://hdl.handle.net/1956/40802010-04-19T00:00:00ZOLS Dialog: An open-source front end to the Ontology Lookup Service
http://hdl.handle.net/1956/4016
OLS Dialog: An open-source front end to the Ontology Lookup Service
Barsnes, Harald; Côté, Richard G.; Eidhammer, Ingvar; Martens, Lennart
Journal article; Peer reviewed
Background: With the growing amount of biomedical data available in public databases it has become
increasingly important to annotate data in a consistent way in order to allow easy access to this rich source of
information. Annotating the data using controlled vocabulary terms and ontologies makes it much easier to
compare and analyze data from different sources. However, finding the correct controlled vocabulary terms can
sometimes be a difficult task for the end user annotating these data.
Results: In order to facilitate the location of the correct term in the correct controlled vocabulary or ontology, the
Ontology Lookup Service was created. However, using the Ontology Lookup Service as a web service is not always
feasible, especially for researchers without bioinformatics support. We have therefore created a Java front end to
the Ontology Lookup Service, called the OLS Dialog, which can be plugged into any application requiring the
annotation of data using controlled vocabulary terms, making it possible to find and use controlled vocabulary
terms without requiring any additional knowledge about web services or ontology formats.
Conclusions: As a user-friendly open source front end to the Ontology Lookup Service, the OLS Dialog makes it
straightforward to include controlled vocabulary support in third-party tools, which ultimately makes the data even
more valuable to the biomedical community.
Sun, 17 Jan 2010 00:00:00 GMThttp://hdl.handle.net/1956/40162010-01-17T00:00:00ZBlind search for post-translational modifications and amino acid substitutions using peptide mass fingerprints from two proteases
http://hdl.handle.net/1956/4015
Blind search for post-translational modifications and amino acid substitutions using peptide mass fingerprints from two proteases
Barsnes, Harald; Mikalsen, Svein-Ole; Eidhammer, Ingvar
Journal article; Peer reviewed
Background: Mass spectrometric analysis of peptides is an essential part of protein identification
and characterization, the latter meaning the identification of modifications and amino acid
substitutions. There are two main approaches for characterization: (i) using a predefined set of
possible modifications and substitutions or (ii) performing a blind search. The first option is
straightforward, but cannot detect modifications or substitutions outside the predefined set. A
blind search does not have this limitation, and therefore has the potential of detecting both known
and unknown modifications and substitutions. Combining the peptide mass fingerprints from two
proteases results in overlapping sequence coverage of the protein, thereby offering alternative views
of the protein and a novel way of indicating post-translational modifications and amino acid
substitutions.
Results: We have developed an algorithm and a software tool, MassShiftFinder, that performs a
blind search using peptide mass fingerprints from two proteases with different cleavage specificities.
The algorithm is based on equal mass shifts for overlapping peptides from the two proteases used,
and can indicate both post-translational modifications and amino acid substitutions. In most cases
it is possible to suggest a restricted area within the overlapping peptides where the mass shift can
occur. The program is available at http://www.bioinfo.no/software/massShiftFinder.
Conclusion: Without any prior assumptions on their presence the described algorithm is able to
indicate post-translational modifications or amino acid substitutions in MALDI-TOF experiments
on identified proteins, and can thereby direct the involved peptides to subsequent TOF-TOF
analysis. The algorithm is designed for detailed and low-throughput characterization of single
proteins.
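The core idea behind the blind search can be sketched in a few lines: when a modification sits in a region covered by peptides from both digests, the observed-minus-theoretical mass shift should be the same in both, and the shift can be localized to the overlap. The data layout and function name below are assumptions for illustration; this is not the MassShiftFinder implementation.

```python
def find_common_shifts(peps_a, peps_b, tolerance=0.1):
    """Flag overlapping peptide pairs from two digests whose
    observed-minus-theoretical mass shifts agree (within tolerance).
    Each peptide is (start, end, theoretical_mass, observed_mass)."""
    hits = []
    for (sa, ea, ta, oa) in peps_a:
        for (sb, eb, tb, ob) in peps_b:
            if sa < eb and sb < ea:  # residue ranges overlap
                shift_a, shift_b = oa - ta, ob - tb
                if abs(shift_a) > tolerance and abs(shift_a - shift_b) <= tolerance:
                    # same nonzero shift in both digests: candidate
                    # modification restricted to the overlapping region
                    hits.append((max(sa, sb), min(ea, eb), shift_a))
    return hits

# Toy example: a +79.97 Da shift (phosphorylation-sized) seen in both digests
tryptic = [(0, 10, 1100.50, 1180.47)]
gluc    = [(5, 14, 1000.40, 1080.37)]
print(find_common_shifts(tryptic, gluc))  # overlap at residues 5..10
```

Because no modification list is consulted, the same check indicates known modifications, unknown modifications, and amino acid substitutions alike.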
Fri, 19 Dec 2008 00:00:00 GMThttp://hdl.handle.net/1956/40152008-12-19T00:00:00ZProtease-dependent fractional mass and peptide properties
http://hdl.handle.net/1956/4014
Protease-dependent fractional mass and peptide properties
Barsnes, Harald; Eidhammer, Ingvar; Cruciani, Véronique; Mikalsen, Svein-Ole
Journal article; Peer reviewed
Mass spectrometric analyses of peptides mainly rely on cleavage of proteins with proteases that have a defined specificity. The specificities of the proteases imply that there is not a random distribution of amino acids in the peptides. The physico-chemical effects of this distribution have been partly analyzed for tryptic peptides, but to a lesser degree for other proteases. Using all human proteins in Swiss-Prot, the relationships between peptide fractional mass, pI and hydrophobicity were investigated. The distribution of the fractional masses and the average regression lines for the fractional masses were similar, but not identical, for the peptides generated by the proteases trypsin, chymotrypsin and gluC, with the steepest regression line for gluC. The fractional mass regression lines for individual proteins showed up to ±100 ppm in mass difference from the average regression line and the peptides generated showed protease-dependent properties. We here show that the fractional mass and some other properties of the peptides are dependent on the protease used for generating the peptides. With the increasing accuracy of mass spectrometry instruments it is possible to exploit the information embedded in the fractional mass of unknown peaks in peptide mass fingerprint spectra.
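The fractional mass discussed above is simply the part of a peptide's monoisotopic mass after the decimal point. A minimal sketch of the computation, using standard monoisotopic residue masses (the table and function names are illustrative, not taken from the paper):

```python
import math

# Standard monoisotopic residue masses in daltons (20 common amino acids)
RESIDUE_MASS = {
    'G': 57.02146, 'A': 71.03711, 'S': 87.03203, 'P': 97.05276,
    'V': 99.06841, 'T': 101.04768, 'C': 103.00919, 'L': 113.08406,
    'I': 113.08406, 'N': 114.04293, 'D': 115.02694, 'Q': 128.05858,
    'K': 128.09496, 'E': 129.04259, 'M': 131.04049, 'H': 137.05891,
    'F': 147.06841, 'R': 156.10111, 'Y': 163.06333, 'W': 186.07931,
}
WATER = 18.01056  # one H2O added per peptide

def peptide_mass(sequence):
    """Monoisotopic mass of an unmodified peptide."""
    return sum(RESIDUE_MASS[aa] for aa in sequence) + WATER

def fractional_mass(sequence):
    """The part of the mass after the decimal point."""
    m = peptide_mass(sequence)
    return m - math.floor(m)

# A tryptic peptide ends in K or R; a gluC peptide ends in E or D,
# so the two digests sample different residue distributions.
print(fractional_mass("SAMPLER"))
```

Plotting fractional mass against nominal mass for all peptides of a digest gives the protease-dependent regression lines the paper analyzes.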
Mon, 22 Sep 2008 00:00:00 GMThttp://hdl.handle.net/1956/40142008-09-22T00:00:00ZMassSorter: a tool for administrating and analyzing data from mass spectrometry experiments on proteins with known amino acid sequences
http://hdl.handle.net/1956/4013
MassSorter: a tool for administrating and analyzing data from mass spectrometry experiments on proteins with known amino acid sequences
Barsnes, Harald; Mikalsen, Svein-Ole; Eidhammer, Ingvar
Journal article; Peer reviewed
Background: Proteomics is the study of the proteome, and is critical to the understanding of
cellular processes. Two central and related tasks of proteomics are protein identification and
protein characterization. Many small laboratories are interested in the characterization of a small
number of proteins, e.g., how posttranslational modifications change under different conditions.
Results: We have developed a software tool called MassSorter for administrating and analyzing
data from peptide mass fingerprinting experiments on proteins with known amino acid sequences.
It is meant for small scale mass spectrometry laboratories that are interested in posttranslational
modifications of known proteins. Several experiments can be compared simultaneously, and the
matched and unmatched peak values are clearly indicated. The hits can be sorted according to m/z
values (default) or according to the sequence of the protein. Filters defined by the user can mark
autolytic protease peaks and other contaminating peaks (keratins, proteins co-migrating with the
protein of interest, etc.). Unmatched peaks can be further analyzed for unexpected modifications
by searches against a local version of the UniMod database. They can also be analyzed for
unexpected cleavages, a highly useful feature for proteins that undergo maturation by proteolytic
cleavage, creating new N- or C-terminals. Additional tools exist for visualization of the results, like
sequence coverage, accuracy plots, different types of statistics, 3D models, etc. The program and
a tutorial are freely available for academic users at http://www.bioinfo.no/software/massSorter.
Conclusion: MassSorter has a number of useful features that can promote the analysis and
administration of MS-data.
Thu, 26 Jan 2006 00:00:00 GMThttp://hdl.handle.net/1956/40132006-01-26T00:00:00ZDevelopment of Tools for Analyzing and Sharing Proteomics Data
http://hdl.handle.net/1956/4012
Development of Tools for Analyzing and Sharing Proteomics Data
Barsnes, Harald
Doctoral thesis
Mon, 22 Mar 2010 00:00:00 GMThttp://hdl.handle.net/1956/40122010-03-22T00:00:00ZAssessing and Mitigating Risks in Computer Systems
http://hdl.handle.net/1956/4004
Assessing and Mitigating Risks in Computer Systems
Netland, Lars-Helge
Doctoral thesis
When it comes to non-trivial networked computer systems, bulletproof security is very
hard to achieve. Over a system's lifetime new security risks are likely to emerge from e.g.
newly discovered classes of vulnerabilities or the arrival of new threat agents. Given the
dynamic environment in which computer systems are deployed, continuous evaluations
and adjustments are wiser than one-shot efforts for perfection. Security risk management
focuses on assessing and treating security risks against computer systems. In this thesis,
elements from risk management are applied to two real-world systems to identify, evaluate,
and mitigate risks. One of the pinpointed weaknesses is studied in-depth to produce an
exploit against the affected system. In addition, approaches to handle common software
security problems are described.
Fri, 26 Sep 2008 00:00:00 GMThttp://hdl.handle.net/1956/40042008-09-26T00:00:00ZThe SHIP Validator: An Annotation-based Content-Validation Framework for Java Applications
http://hdl.handle.net/1956/3973
The SHIP Validator: An Annotation-based Content-Validation Framework for Java Applications
Mancini, Federico; Hovland, Dag; Mughal, Khalid A.
Peer reviewed; Conference object
In this paper, we investigate the use of Java
annotations for software security purposes. In particular, we
implement a framework for content validation where the
validation tests are specified by annotations. This approach
allows to tag what properties to validate directly in the
application code and eliminates the need for external XML
configuration files. Furthermore, the testing code is still kept
separate from the application code, hence facilitating the
creation and reuse of custom tests. The main novelty of this
framework consists in the possibility of defining tests for
the validation of multiple and interdependent properties. The
flexibility and reusability of tests are also improved by allowing
composition and boolean expressions. The main result of the
paper is a flexible framework for content-validation based on
Java annotations.
ICIW 2010, 9-15 May 2010, Barcelona, Spain
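The paper's framework is built on Java annotations; as a language-neutral illustration of the same idea (validation tests declared next to the code they guard, rather than in external configuration), here is an analogous sketch using Python decorators. The names `validate` and `is_valid` are invented for this sketch and are not the SHIP Validator API.

```python
def validate(check, message):
    """Attach a named validation test to a class, analogous to
    tagging a property with an annotation (illustrative sketch)."""
    def decorator(cls):
        cls._checks = getattr(cls, "_checks", []) + [(check, message)]
        return cls
    return decorator

def is_valid(obj):
    # run all attached tests; return the messages of the failed ones
    return [msg for check, msg in getattr(obj, "_checks", [])
            if not check(obj)]

# Tests live next to the class, not in external XML configuration,
# and can express interdependent properties (here: start <= end).
@validate(lambda o: o.start >= 0, "start must be non-negative")
@validate(lambda o: o.start <= o.end, "start must not exceed end")
class Interval:
    def __init__(self, start, end):
        self.start, self.end = start, end

print(is_valid(Interval(0, 5)))   # []
print(is_valid(Interval(7, 3)))   # ['start must not exceed end']
```

The second check inspects two fields at once, which mirrors the paper's main novelty: validating multiple, interdependent properties with one declared test.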
Fri, 01 Jan 2010 00:00:00 GMThttp://hdl.handle.net/1956/39732010-01-01T00:00:00ZThe Inclusion Problem for Regular Expressions
http://hdl.handle.net/1956/3956
The Inclusion Problem for Regular Expressions
Hovland, Dag
Peer reviewed; Chapter
This paper presents a new polynomial-time algorithm for the
inclusion problem for certain pairs of regular expressions. The algorithm
is not based on construction of finite automata, and can therefore be
faster than the lower bound implied by the Myhill-Nerode theorem. The
algorithm automatically discards unnecessary parts of the right-hand
expression. In these cases the right-hand expression might even be 1-ambiguous.
For example, if r is a regular expression such that any DFA
recognizing r is very large, the algorithm can still, in time independent of
r, decide that the language of ab is included in that of (a+r)b. The algorithm
is based on a syntax-directed inference system. It takes arbitrary
regular expressions as input, and if the 1-ambiguity of the right-hand
expression becomes a problem, the algorithm will report this.
Proceedings from the 4th International Conference, LATA 2010
Trier, Germany, May 24-28, 2010
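For contrast with the paper's syntax-directed inference system, the inclusion question itself can be approximated naively by enumerating all strings over the alphabet up to a bound (a finite check only, not a decision procedure, and far from the paper's method; the helper name is illustrative):

```python
import re
from itertools import product

def included_up_to(r, s, alphabet, max_len):
    """Check L(r) subset-of L(s) for all strings up to length max_len.
    A finite approximation: a True answer is not a proof of inclusion,
    but a False answer comes with a genuine counterexample."""
    r_pat, s_pat = re.compile(r), re.compile(s)
    for n in range(max_len + 1):
        for chars in product(alphabet, repeat=n):
            w = "".join(chars)
            if r_pat.fullmatch(w) and not s_pat.fullmatch(w):
                return False  # w is in L(r) but not in L(s)
    return True

# The paper's example shape: L(ab) is included in L((a+r)b); here r = c*
print(included_up_to("ab", "(a|c*)b", "abc", 4))   # True
print(included_up_to("(a|c*)b", "ab", "abc", 4))   # False: "b" matches left only
```

The enumeration cost grows exponentially in max_len, which is exactly the kind of blow-up the paper's algorithm avoids by never materializing the language or a DFA for the right-hand expression.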
Fri, 01 Jan 2010 00:00:00 GMThttp://hdl.handle.net/1956/39562010-01-01T00:00:00ZInvestigating the Limitations of Java Annotations for Input Validation
http://hdl.handle.net/1956/3799
Investigating the Limitations of Java Annotations for Input Validation
Mancini, Federico; Hovland, Dag; Mughal, Khalid A.
Conference object
Recently Java annotations have received
a lot of attention as a possible way to simplify the usage
of various frameworks, ranging from persistence
and verification to security. In this paper we discuss
our experiences in implementing an annotation framework
for input validation purposes. We investigate
the advantages and, more importantly, the limitations of annotations
in the design of validation tests. We conclude that
annotations are a good choice for specifying common
validation tests. However, the limitations of annotations
have an impact on creating and using generic
tests and tests involving multiple properties.
Fri, 01 Jan 2010 00:00:00 GMThttp://hdl.handle.net/1956/37992010-01-01T00:00:00Z