# The best tool for your research, coursework, and thesis!

Page 18 of results: 10730 digital items found in 0.059 seconds

## Proceedings of the 11th workshop on Quantum Physics and Logic

Source: Cornell University
Publisher: Cornell University

Type: Journal article

Published: 27/12/2014
Language: Portuguese

Search relevance: 467.49168%

Subjects: Computer Science - Logic in Computer Science; Computer Science - Computation and Language; Computer Science - Programming Languages; Quantum Physics

This volume contains the proceedings of the 11th International Workshop on
Quantum Physics and Logic (QPL 2014), which was held from the 4th to the 6th of
June, 2014, at Kyoto University, Japan.
The goal of the QPL workshop series is to bring together researchers working
on mathematical foundations of quantum physics, quantum computing and
spatio-temporal causal structures, and in particular those that use logical
tools, ordered algebraic and category-theoretic structures, formal languages,
semantic methods and other computer science methods for the study of physical
behavior in general. Over the past few years, there has been growing activity
in these foundational approaches, together with a renewed interest in the
foundations of quantum theory, which complement the more mainstream research in
quantum computation. Earlier workshops in this series, with the same acronym
but under the name "Quantum Programming Languages", were held in Ottawa (2003),
Turku (2004), Chicago (2005), and Oxford (2006). The first QPL under the new
name Quantum Physics and Logic was held in Reykjavik (2008), followed by Oxford
(2009 and 2010), Nijmegen (2011), Brussels (2012) and Barcelona (2013).

Permanent link for citations:

## Graph Products Revisited: Tight Approximation Hardness of Induced Matching, Poset Dimension and More

Source: Cornell University
Publisher: Cornell University

Type: Journal article

Language: Portuguese

Search relevance: 467.49168%

Subjects: Computer Science - Discrete Mathematics; Computer Science - Computational Complexity; Computer Science - Data Structures and Algorithms; Mathematics - Combinatorics; F.2; G.2.2

Graph product is a fundamental tool with rich applications in both graph
theory and theoretical computer science. It is usually studied in the form
$f(G*H)$ where $G$ and $H$ are graphs, * is a graph product and $f$ is a graph
property. For example, if $f$ is the independence number and * is the
disjunctive product, then the product is known to be multiplicative:
$f(G*H)=f(G)f(H)$.
In this paper, we study graph products in the following non-standard form:
$f((G\oplus H)*J)$ where $G$, $H$ and $J$ are graphs, $\oplus$ and * are two
different graph products and $f$ is a graph property. We show that if $f$ is
the induced and semi-induced matching number, then for some products $\oplus$
and *, it is subadditive in the sense that $f((G\oplus H)*J)\leq
f(G*J)+f(H*J)$. Moreover, when $f$ is the poset dimension number, it is almost
subadditive.
As applications of this result (we only need $J=K_2$ here), we obtain tight
hardness of approximation for various problems in discrete mathematics and
computer science: bipartite induced and semi-induced matching (a.k.a. maximum
expanding sequences), poset dimension, maximum feasible subsystem with 0/1
coefficients, unit-demand min-buying and single-minded pricing, donation center
location, boxicity...
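
For the standard multiplicative example above, the identity $f(G*H)=f(G)f(H)$ can be checked by brute force on small instances. The sketch below is our own illustration (not the paper's non-standard construction): it builds the disjunctive (OR) product and computes independence numbers exhaustively.

```python
from itertools import combinations, product

def independence_number(n, edges):
    """Largest independent set size, by exhaustive search (tiny graphs only)."""
    E = {frozenset(e) for e in edges}
    best, k = 0, 1
    while k <= n:
        if not any(
            all(frozenset((u, v)) not in E for u, v in combinations(S, 2))
            for S in combinations(range(n), k)
        ):
            break
        best, k = k, k + 1
    return best

def disjunctive_product(n1, e1, n2, e2):
    """OR-product: (g1,h1) ~ (g2,h2) iff g1~g2 in G or h1~h2 in H."""
    E1 = {frozenset(e) for e in e1}
    E2 = {frozenset(e) for e in e2}
    verts = list(product(range(n1), range(n2)))
    idx = {v: i for i, v in enumerate(verts)}
    edges = [
        (idx[a], idx[b])
        for a, b in combinations(verts, 2)
        if frozenset((a[0], b[0])) in E1 or frozenset((a[1], b[1])) in E2
    ]
    return len(verts), edges

# alpha(C5) = 2, so multiplicativity predicts alpha(C5 * C5) = 4
c5 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
n, e = disjunctive_product(5, c5, 5, c5)
print(independence_number(5, c5), independence_number(n, e))  # 2 4
```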

Permanent link for citations:

## Proceedings International Workshop on Interactions, Games and Protocols

Source: Cornell University
Publisher: Cornell University

Type: Journal article

Published: 17/02/2011
Language: Portuguese

Search relevance: 467.49168%

Subjects: Computer Science - Computer Science and Game Theory; Computer Science - Distributed, Parallel, and Cluster Computing

The focus of the iWIGP workshop is the interrelation between interactions,
games and protocols. How does computer science deal with nondeterministic
interactions where the actions a system takes are not (completely) determined
by the interactions the system is involved in? In computer science,
nondeterministic interactions are usually described by protocols. However,
these interactions can also be viewed as games. As is to be expected, games have
become an increasingly important modeling tool wherever nondeterministic
interactions are involved -- from foundations in game semantics and reactive
systems to applications in communication protocols and electronic business
applications. The goal of this workshop has been to bring researchers from
industry and academia together and to explore how a better understanding of the
interrelation between interactions, games and protocols leads to
better-designed and more reliable nondeterministic interacting systems.
iWIGP 2011 was collocated with ETAPS 2011 in Saarbruecken, Germany. The
programme consisted of three invited talks, by Kim Larsen, Marielle Stoelinga
and Viktor Kuncak, and five refereed papers, selected by a strong programme
committee of international reputation. The refereed papers are contained in
this volume.

Permanent link for citations:

## Implications of computer science principles for quantum physics

Source: Cornell University
Publisher: Cornell University

Type: Journal article

Published: 02/07/2014
Language: Portuguese

Search relevance: 467.4937%

The Church-Turing thesis is one of the pillars of computer science; it
postulates that every classical system has equivalent computability power to
the so-called Turing machine. While this thesis is crucial for our
understanding of computing devices, its implications in other scientific fields
have hardly been explored. Here we start this research programme in the context
of quantum physics and show that computer science laws have profound
implications for some of the most fundamental results of the theory. We first
show how they question our understanding of what a mixed quantum state is, as we
identify situations in which ensembles of quantum states defining the same
mixed state, indistinguishable according to the quantum postulates, do become
distinguishable when prepared by a computer. We also show a new loophole for
Bell-like experiments: if some of the parties in a Bell-like experiment use a
computer to decide which measurements to make, then the computational resources
of an eavesdropper have to be limited in order to have a proper observation of
non-locality. Our work opens a new direction in the search for a framework
unifying computer science and quantum physics.; Comment: 4 pages+8 pages appendix, 4 figures

Permanent link for citations:

## Weighted Polynomial Approximations: Limits for Learning and Pseudorandomness

Source: Cornell University
Publisher: Cornell University

Type: Journal article

Published: 08/12/2014
Language: Portuguese

Search relevance: 467.49168%

Polynomial approximations to boolean functions have led to many positive
results in computer science. In particular, polynomial approximations to the
sign function underlie algorithms for agnostically learning halfspaces, as well
as pseudorandom generators for halfspaces. In this work, we investigate the
limits of these techniques by proving inapproximability results for the sign
function.
Firstly, the polynomial regression algorithm of Kalai et al. (SIAM J. Comput.
2008) shows that halfspaces can be learned with respect to log-concave
distributions on $\mathbb{R}^n$ in the challenging agnostic learning model. The
power of this algorithm relies on the fact that under log-concave
distributions, halfspaces can be approximated arbitrarily well by low-degree
polynomials. We ask whether this technique can be extended beyond log-concave
distributions, and establish a negative result. We show that polynomials of any
degree cannot approximate the sign function to within arbitrarily low error for
a large class of non-log-concave distributions on the real line, including
those with densities proportional to $\exp(-|x|^{0.99})$.
Secondly, we investigate the derandomization of Chernoff-type concentration
inequalities. Chernoff-type tail bounds on sums of independent random variables
have pervasive applications in theoretical computer science. Schmidt et al.
(SIAM J. Discrete Math. 1995) showed that these inequalities can be established
for sums of random variables with only $O(\log(1/\delta))$-wise independence...

Permanent link for citations:

## TRX: A Formally Verified Parser Interpreter

Source: Cornell University
Publisher: Cornell University

Type: Journal article

Language: Portuguese

Search relevance: 467.49168%

Subjects: Computer Science - Logic in Computer Science; Computer Science - Formal Languages and Automata Theory; Computer Science - Programming Languages; D.3.4, D.2.4, F.3.1, F.4.2

Parsing is an important problem in computer science and yet surprisingly
little attention has been devoted to its formal verification. In this paper, we
present TRX: a parser interpreter formally developed in the proof assistant
Coq, capable of producing formally correct parsers. We are using parsing
expression grammars (PEGs), a formalism essentially representing recursive
descent parsing, which we consider an attractive alternative to context-free
grammars (CFGs). From this formalization we can extract a parser for an
arbitrary PEG grammar with a guarantee of total correctness, i.e., the
resulting parser is terminating and correct with respect to its grammar and the
semantics of PEGs; both properties formally proven in Coq.; Comment: 26 pages, LMCS
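
As a toy illustration of PEG semantics (ordered choice, greedy repetition, no backtracking once a choice commits), here is a minimal parser-combinator sketch in Python; it is in no way the verified Coq development, and all names are ours.

```python
def lit(s):
    """Match the literal string s at position i."""
    def p(text, i):
        return i + len(s) if text.startswith(s, i) else None
    return p

def seq(*ps):
    """Match the parsers in order; fail if any fails."""
    def p(text, i):
        for q in ps:
            i = q(text, i)
            if i is None:
                return None
        return i
    return p

def choice(*ps):
    """PEG ordered choice: commit to the first alternative that matches."""
    def p(text, i):
        for q in ps:
            j = q(text, i)
            if j is not None:
                return j
        return None
    return p

def star(q):
    """Greedy repetition that never backtracks (PEG, not CFG, semantics)."""
    def p(text, i):
        while True:
            j = q(text, i)
            if j is None or j == i:   # stop on failure or empty match
                return i
            i = j
    return p

# grammar: B <- ('(' B ')')*   -- balanced parentheses
def balanced(text, i=0):
    return star(seq(lit('('), balanced, lit(')')))(text, i)

print(balanced('(())()'), balanced(')('))  # 6 0
```

The `j == i` guard in `star` is what keeps repetition of a nullable parser from looping forever, a standard PEG interpreter concern.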

Permanent link for citations:

## Kripke Semantics for Martin-Löf's Extensional Type Theory

Source: Cornell University
Publisher: Cornell University

Type: Journal article

Language: Portuguese

Search relevance: 467.60227%

It is well-known that simple type theory is complete with respect to
non-standard set-valued models. Completeness for standard models only holds
with respect to certain extended classes of models, e.g., the class of
cartesian closed categories. Similarly, dependent type theory is complete for
locally cartesian closed categories. However, it is usually difficult to
establish the coherence of interpretations of dependent type theory, i.e., to
show that the interpretations of equal expressions are indeed equal. Several
classes of models have been used to remedy this problem. We contribute to this
investigation by giving a semantics that is standard, coherent, and
sufficiently general for completeness while remaining relatively easy to
compute with. Our models interpret types of Martin-Löf's extensional
dependent type theory as sets indexed over posets or, equivalently, as
fibrations over posets. This semantics can be seen as a generalization to
dependent type theory of the interpretation of intuitionistic first-order logic
in Kripke models. This yields a simple coherent model theory, with respect to
which simple and dependent type theory are sound and complete.

Permanent link for citations:

## Parallel Computation Is ESS

Source: Cornell University
Publisher: Cornell University

Type: Journal article

Language: Portuguese

Search relevance: 467.49168%

Subjects: Computer Science - Learning; Computer Science - Artificial Intelligence; Computer Science - Computer Science and Game Theory; 68Q32, 68Q05, 68T05, 68Q10, 62P10, 97M60, 92D15, 91A80

There are an enormous number of examples of computation in nature, exemplified
across multiple species in biology. One crucial aim of these computations
across all life forms is their ability to learn, thereby increasing the chance
of their survival. In the current paper a formal definition of autonomous
learning is proposed. From that definition we establish a Turing machine model
for learning, where rule tables can be added or deleted but cannot be modified.
Sequential and parallel implementations of this model are discussed. It is
found that for general-purpose learning based on this model, implementations
capable of parallel execution would be evolutionarily stable. This is proposed
to be one of the reasons why parallelism in computation is found in abundance
in nature.; Comment: Submitted to Theoretical Computer Science - Elsevier

Permanent link for citations:

## Stackelberg Pricing is Hard to Approximate within $2-\epsilon$

Source: Cornell University
Publisher: Cornell University

Type: Journal article

Published: 02/10/2009
Language: Portuguese

Search relevance: 467.49168%

Subjects: Computer Science - Computer Science and Game Theory; Computer Science - Computational Complexity; Computer Science - Data Structures and Algorithms; F.2.2

Stackelberg Pricing Games is a two-level combinatorial pricing problem
studied in the Economics, Operations Research, and Computer Science communities.
In this paper, we consider the decade-old shortest-path version of this problem,
which is the first and most studied problem in this family.
The game is played on a graph (representing a network) consisting of *fixed
cost* edges and *pricable* or *variable cost* edges. The fixed
cost edges already have some fixed price (representing the competitor's
prices). Our task is to choose prices for the variable cost edges. After that,
a client will buy the cheapest path from a node $s$ to a node $t$, using any
combination of fixed cost and variable cost edges. The goal is to maximize the
revenue on variable cost edges.
In this paper, we show that the problem is hard to approximate within
$2-\epsilon$, improving the previous APX-hardness result by Joret [to appear
in *Networks*]. Our technique combines the existing ideas with a new
insight into the price structure and its relation to the hardness of the
instances.
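
The client's side of one round of this game is easy to simulate: fix the prices, run a shortest-path computation over fixed and priced edges together, and sum the prices collected along the chosen path. Below is our own toy sketch (not from the paper); note that ties between equally cheap paths are broken arbitrarily here, whereas the model usually assumes they favor the leader.

```python
import heapq

def cheapest_path_revenue(n, fixed_edges, var_edges, prices, s, t):
    """Cost of the client's cheapest s-t path and the revenue it yields."""
    adj = [[] for _ in range(n)]
    for u, v, c in fixed_edges:                # fixed cost, no revenue
        adj[u].append((v, c, 0)); adj[v].append((u, c, 0))
    for (u, v), p in zip(var_edges, prices):   # price is both cost and revenue
        adj[u].append((v, p, p)); adj[v].append((u, p, p))
    dist = {s: (0, 0)}                         # node -> (path cost, revenue)
    pq = [(0, 0, s)]
    while pq:
        d, r, u = heapq.heappop(pq)
        if (d, r) != dist.get(u):
            continue                           # stale queue entry
        if u == t:
            return d, r
        for v, c, rev in adj[u]:
            if v not in dist or d + c < dist[v][0]:
                dist[v] = (d + c, r + rev)
                heapq.heappush(pq, (d + c, r + rev, v))
    return None

# two parallel s-t routes: a fixed edge of cost 5 and one pricable edge.
# pricing the variable edge at 4 undercuts the competitor and earns 4.
print(cheapest_path_revenue(2, [(0, 1, 5)], [(0, 1)], [4], 0, 1))  # (4, 4)
```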

Permanent link for citations:

## Decidability of higher-order matching

Source: Cornell University
Publisher: Cornell University

Type: Journal article

Language: Portuguese

Search relevance: 467.49168%

Subjects: Computer Science - Logic in Computer Science; Computer Science - Computer Science and Game Theory; F.4.1

We show that the higher-order matching problem is decidable using a
game-theoretic argument.; Comment: appears in LMCS (Logical Methods in Computer Science)

Permanent link for citations:

## Tipi: A TPTP-based theory development environment emphasizing proof analysis

Source: Cornell University
Publisher: Cornell University

Type: Journal article

Language: Portuguese

Search relevance: 467.60227%

In some theory development tasks, a problem is satisfactorily solved once it
is shown that a theorem (conjecture) is derivable from the background theory
(premises). Depending on one's motivations, the details of the derivation of
the conjecture from the premises may or may not be important. In some contexts,
though, one wants more from theory development than simply derivability of the
target theorems from the background theory. One may want to know which premises
of the background theory were used in the course of a proof output by an
automated theorem prover (when a proof is available), whether they are all, in
suitable senses, necessary (and why), whether alternative proofs can be found,
and so forth. The problem, then, is to support proof analysis in theory
development; the tool described in this paper, Tipi, aims to provide precisely
that.; Comment: 10 pages, 3 tables. Submitted to ATX 2012 (Automated Theory
Exploration, http://dream.inf.ed.ac.uk/events/atx2012/)

Permanent link for citations:

## Confronting Intractability via Parameters

Source: Cornell University
Publisher: Cornell University

Type: Journal article

Language: Portuguese

Search relevance: 467.60227%

One approach to confronting computational hardness is to try to understand
the contribution of various parameters to the running time of algorithms and
the complexity of computational tasks. Almost no computational tasks in real
life are specified by their size alone. It is not hard to imagine that some
parameters contribute more intractability than others and it seems reasonable
to develop a theory of computational complexity which seeks to exploit this
fact. Such a theory should be able to address the needs of practitioners in
algorithmics. The last twenty years have seen the development of such a theory.
This theory has a large number of successes in terms of a rich collection of
algorithmic techniques both practical and theoretical, and a fine-grained
intractability theory. Whilst the theory has been widely used in a number of
areas of application, including computational biology, linguistics, VLSI
design, learning theory and many others, knowledge of the area is highly
varied. We hope that this article will both present the basic theory and point
to the wide array of techniques available. Naturally the treatment is condensed,
and the reader who wants more should go to the texts, Downey and Fellows, Flum
and Grohe, Niedermeier, and the upcoming undergraduate text Downey and Fellows.; Comment: Accepted for publication in Computer Science Review [with some
additional corrections]

Permanent link for citations:

## On the Complexity of Core, Kernel, and Bargaining Set

Source: Cornell University
Publisher: Cornell University

Type: Journal article

Language: Portuguese

Search relevance: 467.49168%

Subjects: Computer Science - Computer Science and Game Theory; Computer Science - Artificial Intelligence; Computer Science - Computational Complexity; F.2; J.4

Coalitional games are mathematical models suited to analyze scenarios where
players can collaborate by forming coalitions in order to obtain higher worths
than by acting in isolation. A fundamental problem for coalitional games is to
single out the most desirable outcomes in terms of appropriate notions of worth
distributions, which are usually called solution concepts. Motivated by the
fact that decisions taken by realistic players cannot involve unbounded
resources, recent computer science literature reconsidered the definition of
such concepts by advocating the relevance of assessing the amount of resources
needed for their computation in terms of their computational complexity. By
following this avenue of research, the paper provides a complete picture of the
complexity issues arising with three prominent solution concepts for
coalitional games with transferable utility, namely, the core, the kernel, and
the bargaining set, whenever the game worth-function is represented in some
reasonable compact form (otherwise, if the worths of all coalitions are
explicitly listed, the input sizes are so large that complexity problems
are---artificially---trivial). The starting investigation point is the setting
of graph games, about which various open questions were stated in the
literature. The paper gives an answer to these questions...

Permanent link for citations:

## Proceedings First Workshop on GRAPH Inspection and Traversal Engineering

Source: Cornell University
Publisher: Cornell University

Type: Journal article

Published: 22/10/2012
Language: Portuguese

Search relevance: 467.49168%

Subjects: Computer Science - Data Structures and Algorithms; Computer Science - Discrete Mathematics; Computer Science - Logic in Computer Science; D.1.3; D.2.4; G.2.2; I.2.8

These are the proceedings of the First Workshop on GRAPH Inspection and
Traversal Engineering (GRAPHITE 2012), which took place on April 1, 2012 in
Tallinn, Estonia, as a satellite event of the 15th European Joint Conferences
on Theory and Practice of Software (ETAPS 2012).
The topic of the GRAPHITE workshop is graph search in all its forms in
computer science. Graph search algorithms tend to have common characteristics,
such as duplicate state detection, independent of their application domain.
Over the past few years, it has been shown that the scalability of such
algorithms can be dramatically improved by using, e.g., external memory, by
exploiting parallel architectures, such as clusters, multi-core CPUs, and
graphics processing units, and by using heuristics to guide the search. The
goal of this event is to gather scientists from different communities, such as
model checking, artificial intelligence planning, game playing, and algorithm
engineering, who do research on graph search algorithms, such that awareness of
each other's work is increased.

Permanent link for citations:

## Convergence to Equilibrium of Logit Dynamics for Strategic Games

Source: Cornell University
Publisher: Cornell University

Type: Journal article

Language: Portuguese

Search relevance: 467.49168%

Subjects: Computer Science - Computer Science and Game Theory; Computer Science - Discrete Mathematics; Computer Science - Data Structures and Algorithms

We present the first general bounds on the mixing time of the Markov chain
associated to the logit dynamics for wide classes of strategic games. The logit
dynamics with inverse noise beta describes the behavior of a complex system
whose individual components act selfishly and keep responding according to some
partial ("noisy") knowledge of the system, where the capacity of the agent to
know the system and compute her best move is measured by the inverse of the
parameter beta.
In particular, we prove nearly tight bounds for potential games and games
with dominant strategies. Our results show that, for potential games, the
mixing time is upper and lower bounded by an exponential in the inverse of the
noise and in the maximum potential difference. Instead, for games with dominant
strategies, the mixing time cannot grow arbitrarily with the inverse of the
noise.
Finally, we refine our analysis for a subclass of potential games called
graphical coordination games, a class of games that have been previously
studied in Physics and, more recently, in Computer Science in the context of
diffusion of new technologies. We give evidence that the mixing time of the
logit dynamics for these games strongly depends on the structure of the
underlying graph. We prove that the mixing time of the logit dynamics for these
games can be upper bounded by a function that is exponential in the cutwidth of
the underlying graph and in the inverse of noise. Moreover...
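
A single revision step of the logit dynamics is simple to state: a uniformly random player switches to strategy s with probability proportional to exp(beta * utility(s)), so beta = 0 means uniform noise and large beta approaches best response. The sketch below runs this rule on our own toy two-player coordination game (a graphical coordination game on a single edge); it illustrates the dynamics only, not the paper's mixing-time analysis.

```python
import math
import random

def logit_update(utilities, beta, rng=random):
    """Sample a strategy index with probability proportional to
    exp(beta * utility); utilities are shifted by their max for
    numerical stability."""
    m = max(utilities)
    w = [math.exp(beta * (u - m)) for u in utilities]
    x = rng.random() * sum(w)
    for s, ws in enumerate(w):
        x -= ws
        if x <= 0:
            return s
    return len(w) - 1

def step(profile, payoff, beta, rng=random):
    """One step of logit dynamics: a uniformly chosen player revises
    her (binary) strategy by the logit rule; everyone else stays put."""
    i = rng.randrange(len(profile))
    utils = [payoff(i, s, profile) for s in range(2)]
    profile = list(profile)
    profile[i] = logit_update(utils, beta, rng)
    return tuple(profile)

# coordination game: a player earns 1 for matching the other, 0 otherwise
payoff = lambda i, s, prof: 1.0 if s == prof[1 - i] else 0.0
rng = random.Random(0)
prof = (0, 1)
for _ in range(100):
    prof = step(prof, payoff, 10.0, rng)
print(prof)  # with beta this large, the players almost surely coordinate
```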

Permanent link for citations:

## Probabilistic Inductive Inference:a Survey

Source: Cornell University
Publisher: Cornell University

Type: Journal article

Published: 14/02/1999
Language: Portuguese

Search relevance: 467.49168%

Subjects: Computer Science - Learning; Computer Science - Computational Complexity; Computer Science - Logic in Computer Science; Mathematics - Logic; F.1.1., F.4.1., I.2.3., I.2.6

Inductive inference is a recursion-theoretic theory of learning, first
developed by E. M. Gold (1967). This paper surveys developments in
probabilistic inductive inference. We mainly focus on finite inference of
recursive functions, since this simple paradigm has produced the most
interesting (and most complex) results.; Comment: 16 pages, to appear in Theoretical Computer Science

Permanent link for citations:

## Attribute Exploration of Gene Regulatory Processes

Source: Cornell University
Publisher: Cornell University

Type: Journal article

Published: 09/04/2012
Language: Portuguese

Search relevance: 467.49168%

Subjects: Quantitative Biology - Molecular Networks; Computer Science - Computational Engineering, Finance, and Science; Computer Science - Logic in Computer Science; Mathematics - Logic

This thesis aims at the logical analysis of discrete processes, in particular
of such generated by gene regulatory networks. States, transitions and
operators from temporal logics are expressed in the language of Formal Concept
Analysis. By the attribute exploration algorithm, an expert or a computer
program is enabled to validate a minimal and complete set of implications, e.g.
by comparison of predictions derived from literature with observed data. Here,
these rules represent temporal dependencies within gene regulatory networks
including coexpression of genes, reachability of states, invariants or possible
causal relationships. This new approach is embedded into the theory of
universal coalgebras, particularly automata, Kripke structures and Labelled
Transition Systems. A comparison with the temporal expressivity of Description
Logics is made. The main theoretical results concern the integration of
background knowledge into the successive exploration of the defined data
structures (formal contexts). Applying the method a Boolean network from
literature modelling sporulation of Bacillus subtilis is examined. Finally, we
developed an asynchronous Boolean network for extracellular matrix formation
and destruction in the context of rheumatoid arthritis.; Comment: 111 pages...
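
The asynchronous update scheme mentioned above can be stated in a few lines: at each step a single component is recomputed from the current state while the rest stay fixed. The two-gene network below is a hypothetical toy example of ours, not the Bacillus subtilis or matrix-formation model from the thesis.

```python
def async_step(state, rules, i):
    """Asynchronous Boolean update: only component i is recomputed from
    the current state; all other genes keep their value."""
    new = list(state)
    new[i] = rules[i](state)
    return tuple(new)

# hypothetical toy network: gene 0 is repressed by gene 1,
# gene 1 is activated by gene 0
rules = [lambda s: not s[1], lambda s: s[0]]
s = (True, False)
s = async_step(s, rules, 1)   # update gene 1 first -> (True, True)
s = async_step(s, rules, 0)   # then gene 0         -> (False, True)
print(s)  # (False, True)
```

Different update orders can reach different states from the same start, which is exactly what distinguishes the asynchronous semantics from a synchronous sweep.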

Permanent link for citations:

## Reachability in Two-Dimensional Vector Addition Systems with States is PSPACE-complete

Source: Cornell University
Publisher: Cornell University

Type: Journal article

Published: 13/12/2014
Language: Portuguese

Search relevance: 467.49168%

Subjects: Computer Science - Formal Languages and Automata Theory; Computer Science - Computational Complexity; Computer Science - Logic in Computer Science; F.1.1; F.1.3

Determining the complexity of the reachability problem for vector addition
systems with states (VASS) is a long-standing open problem in computer science.
Long known to be decidable, the problem to this day lacks any complexity upper
bound whatsoever. In this paper, reachability for two-dimensional VASS is shown
PSPACE-complete. This improves on a previously known doubly exponential time
bound established by Howell, Rosier, Huynh and Yen in 1986. The coverability
and boundedness problems are also noted to be PSPACE-complete. In addition,
some complexity results are given for the reachability problem in
two-dimensional VASS and in integer VASS when numbers are encoded in unary.; Comment: 27 pages, 8 figures
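
A configuration of a 2-dimensional VASS is a control state plus two non-negative counters. The sketch below is a plain bounded breadth-first search over configurations (our own illustration, nothing like the paper's PSPACE algorithm): within a counter bound it can certify reachability, but a negative answer proves nothing.

```python
from collections import deque

def vass_reach(transitions, start, target, bound):
    """BFS over configurations (state, x, y) of a 2-dim VASS, with both
    counters confined to [0, bound]."""
    seen = {start}
    frontier = deque([start])
    while frontier:
        q, x, y = frontier.popleft()
        if (q, x, y) == target:
            return True
        for p, (dx, dy), p2 in transitions:
            if p != q:
                continue
            nx, ny = x + dx, y + dy          # counters may never go negative
            if 0 <= nx <= bound and 0 <= ny <= bound:
                c = (p2, nx, ny)
                if c not in seen:
                    seen.add(c)
                    frontier.append(c)
    return False

# toy VASS: state 'a' pumps the first counter, state 'b' transfers it
# into the second counter one unit at a time
T = [('a', (1, 0), 'a'), ('a', (0, 0), 'b'), ('b', (-1, 1), 'b')]
print(vass_reach(T, ('a', 0, 0), ('b', 0, 3), bound=10))  # True
```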

Permanent link for citations:

## Finite-Memory Strategy Synthesis for Robust Multidimensional Mean-Payoff Objectives

Source: Cornell University
Publisher: Cornell University

Type: Journal article

Language: Portuguese

Search relevance: 467.49168%

Two-player games on graphs provide the mathematical foundation for the study
of reactive systems. In the quantitative framework, an objective assigns a
value to every play, and the goal of player 1 is to minimize the value of the
objective. In this framework, there are two relevant synthesis problems to
consider: the quantitative analysis problem is to compute the minimal (or
infimum) value that player 1 can assure, and the boolean analysis problem asks
whether player 1 can assure that the value of the objective is at most $\nu$
(for a given threshold $\nu$). Mean-payoff expression games are played on a
multidimensional weighted graph. An atomic mean-payoff expression objective is
the mean-payoff value (the long-run average weight) of a certain dimension, and
the class of mean-payoff expressions is the closure of atomic mean-payoff
expressions under the algebraic operations of max, min, numerical complement
and sum. In this work, we study for the first time the strategy synthesis
problems for games with robust quantitative objectives, namely, games with
mean-payoff expression objectives. While in general, optimal strategies for
these games require infinite memory, in synthesis we are typically interested
in the construction of a finite-state system. Hence...
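
For a lasso-shaped play (a finite prefix followed by a repeated cycle), each atomic mean-payoff value is just the per-dimension cycle average, since any finite prefix washes out of the long-run average; an expression then combines those values with max, min, complement and sum. A small sketch of ours under that assumption:

```python
def mean_payoff(cycle_weights):
    """Per-dimension long-run average of a play that eventually repeats
    this cycle forever: the value is the cycle average in each dimension."""
    n = len(cycle_weights)
    dims = len(cycle_weights[0])
    return [sum(w[d] for w in cycle_weights) / n for d in range(dims)]

# a 2-dimensional weighted cycle; max / min / sum are among the operations
# under which mean-payoff expressions are closed
cycle = [(2, 0), (4, 6)]
v = mean_payoff(cycle)
print(v, max(v), min(v), sum(v))  # [3.0, 3.0] 3.0 3.0 6.0
```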

Permanent link for citations:

## Computer-aided verification in mechanism design

Source: Cornell University
Publisher: Cornell University

Type: Journal article

Language: Portuguese

Search relevance: 467.4937%

In mechanism design, the gold standard solution concepts are dominant
strategy incentive compatibility, and Bayesian incentive compatibility. These
simple solution concepts relieve the (possibly unsophisticated) bidders from
the need to engage in complicated strategizing. This is a clean story when the
mechanism is "obviously" incentive compatible, as with a simple second price
auction. However, when the proof of incentive compatibility is complex,
unsophisticated agents may strategize in unpredictable ways if they are not
convinced of the incentive properties. In practice, this concern may limit the
mechanism designer to mechanisms where the incentive properties are obvious to
all agents.
To alleviate this problem, we propose to use techniques from computer-aided
verification to construct formal proofs of incentive properties. Because formal
proofs can be automatically checked, agents do not need to manually verify or
even understand complicated paper proofs.
To confirm the viability of this approach, we present the verification of one
sophisticated mechanism: the generic reduction from Bayesian incentive
compatible mechanism design to algorithm design given by Hartline, Kleinberg,
and Malekian. This mechanism presents new challenges for formal verification...

Permanent link for citations: