The original design of the Internet architecture assumed a fixed, trusted network. Today, the Internet has become dynamic and vulnerable to security attacks. Nor did the original design anticipate the need to integrate heterogeneous technologies or wireless environments. The current architecture presents a series of technical barriers to providing these services, one of the greatest being the semantic overload of the Internet Protocol (IP): the IP address acts as a locator at the network layer and as an identifier at the transport layer, which precludes new functionality such as mobility and opens security holes. This work presents a proposed implementation of a next-generation Internet architecture for provisioning new services in a natural way, integrated with the current Internet. The proposed implementation architecture supports mobility, multihoming, security, the integration of heterogeneous networks, and legacy applications by introducing a new identification layer into the current architecture. This new layer aims to separate identity from location and to become a communication option for heterogeneous networks. Additional mechanisms are proposed to support the functionality of the architecture...
In this dissertation, we propose a new architecture for Internet congestion control that decouples the control of congestion from the bandwidth-allocation policy. We show that the new protocol, called XCP, enables very large per-flow throughput (e.g., more than 1 Gb/s), which is unachievable with current congestion control. Additionally, we show via extensive simulations that XCP significantly improves overall performance, reducing the drop rate by three orders of magnitude, increasing utilization, decreasing queuing delay, and attaining fairness within a few RTTs. Using tools from control theory, we model XCP and demonstrate that, in steady state, it is stable for any capacity, delay, and number of sources. XCP does not maintain any per-flow state in routers and requires only a few CPU cycles per packet, making it implementable in high-speed routers. Its flexible architecture facilitates the design and implementation of quality of service, such as guaranteed and proportional bandwidth allocations. Finally, XCP is amenable to gradual deployment.; by Dina Katabi.; Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2003.; Includes bibliographical references (p. 124-129).
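The decoupling described above rests on routers computing explicit aggregate feedback from spare bandwidth and the persistent queue. A minimal sketch of that control law, assuming the gains alpha = 0.4 and beta = 0.226 from the XCP design; the link numbers are purely illustrative, and real XCP carries the feedback in packet headers rather than keeping per-flow state at routers:

```python
def aggregate_feedback(capacity, input_rate, queue, avg_rtt,
                       alpha=0.4, beta=0.226):
    """Aggregate feedback per control interval (bytes): reward spare
    bandwidth, drain any persistent queue."""
    spare = capacity - input_rate      # unused capacity, bytes/sec
    return alpha * avg_rtt * spare - beta * queue

# Illustrative: 10 MB/s link, 8 MB/s of input traffic,
# a 50 KB standing queue, 100 ms average RTT.
phi = aggregate_feedback(10e6, 8e6, 50e3, 0.1)  # positive: flows may grow
```

A positive feedback value is divided among flows as window increases, while a negative value shrinks them, which is how XCP converges toward high utilization with near-empty queues.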
Conventional high-speed Internet routers are built using custom-designed microprocessors, dubbed network processors, to efficiently handle the task of packet routing. While capable of meeting the performance demanded of them, these custom network processors generally lack the flexibility to incorporate new features and do not scale well beyond what they were designed for. Furthermore, they tend to suffer from long and costly development cycles, since each new generation must be redesigned to support new features and fabricated anew in hardware. This thesis presents a new design for a network processor, one implemented entirely in software on a tiled, general-purpose microprocessor. The network processor is implemented on the Raw microprocessor, a general-purpose microchip developed by the Computer Architecture Group at MIT. The Raw chip consists of sixteen identical processing tiles arranged in a four-by-four matrix and connected by four inter-tile communication networks; the Raw chip is designed to scale up merely by adding more tiles to the matrix. By exploiting the parallelism inherent in the task of packet forwarding on this inherently parallel microprocessor, the Raw network processor achieves performance that matches or exceeds that of commercially available custom-designed network processors. At the same time...
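Packet forwarding parallelizes naturally because packets are independent: a front-end tile can spread route lookups across the sixteen-tile array while each tile runs an ordinary longest-prefix match. A toy software sketch of that idea; the forwarding table, prefixes, and modulo dispatch are hypothetical illustrations, not Raw's actual pipeline:

```python
NUM_TILES = 16  # Raw's four-by-four tile array

# Hypothetical forwarding table: (network, prefix_len) -> next hop.
ROUTES = [(("10.0.0.0", 8), "hopA"), (("10.1.0.0", 16), "hopB")]

def ip_to_int(ip):
    a, b, c, d = (int(x) for x in ip.split("."))
    return (a << 24) | (b << 16) | (c << 8) | d

def longest_prefix_match(dst):
    """Classic longest-prefix match over the toy table."""
    best_len, best_hop = -1, None
    addr = ip_to_int(dst)
    for (net, plen), hop in ROUTES:
        mask = (0xFFFFFFFF << (32 - plen)) & 0xFFFFFFFF
        if (addr & mask) == (ip_to_int(net) & mask) and plen > best_len:
            best_len, best_hop = plen, hop
    return best_hop

def dispatch_tile(dst):
    """Packets are independent, so lookups can be spread across the
    tiles, e.g. by a simple function of the destination address."""
    return ip_to_int(dst) % NUM_TILES
```

Because each lookup touches only read-mostly table state, many such lookups can proceed on different tiles at once, which is the parallelism the thesis exploits.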
This thesis presents a new approach to root cause localization and fault diagnosis in the Internet based on a Common Architecture for Probabilistic Reasoning in the Internet (CAPRI), in which distributed, heterogeneous diagnostic agents efficiently conduct diagnostic tests and communicate observations, beliefs, and knowledge to probabilistically infer the cause of network failures. Unlike previous systems that can only diagnose a limited set of network component failures using a limited set of diagnostic tests, CAPRI provides a common, extensible architecture for distributed diagnosis that allows experts to improve the system by adding new diagnostic tests and new dependency knowledge. To support distributed diagnosis using new tests and knowledge, CAPRI must overcome several challenges, including the extensible representation and communication of diagnostic information, the description of diagnostic agent capabilities, and efficient distributed inference. Furthermore, the architecture must scale to support diagnosis of a large number of failures using many diagnostic agents. To address these challenges, this thesis presents a probabilistic approach to diagnosis based on an extensible, distributed component ontology to support the definition of new classes of components and diagnostic tests; a service description language for describing new diagnostic capabilities in terms of their inputs and outputs; and a message processing procedure for dynamically incorporating new information from other agents...
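The kind of probabilistic inference such diagnostic agents perform can be illustrated with a naive-Bayes toy: given prior beliefs over candidate causes and the conditional probabilities of diagnostic test outcomes, compute a posterior over causes. The causes, tests, and numbers below are invented for illustration and are not CAPRI's actual ontology or inference procedure:

```python
# Hypothetical dependency knowledge: priors over causes and
# P(test fails | cause) for two diagnostic tests.
PRIOR = {"dns_failure": 0.2, "link_down": 0.1, "ok": 0.7}
LIKELIHOOD = {
    "dns_failure": {"ping_fails": 0.05, "dns_fails": 0.95},
    "link_down":   {"ping_fails": 0.95, "dns_fails": 0.90},
    "ok":          {"ping_fails": 0.01, "dns_fails": 0.01},
}

def posterior(observations):
    """Naive-Bayes posterior over causes given boolean test results."""
    scores = {}
    for cause, prior in PRIOR.items():
        p = prior
        for test, failed in observations.items():
            q = LIKELIHOOD[cause][test]
            p *= q if failed else (1 - q)
        scores[cause] = p
    total = sum(scores.values())
    return {c: p / total for c, p in scores.items()}

# Ping succeeds but DNS lookups fail: DNS failure becomes most likely.
post = posterior({"ping_fails": False, "dns_fails": True})
```

Distributing this computation, so that different agents contribute different likelihood tables and observations, is the part CAPRI's architecture addresses.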
In designing and building a network like the Internet, we continue to face the problems of scale and distribution. With the dramatic expansion in scale and heterogeneity of the Internet, network management has become an increasingly difficult task. Furthermore, network applications often need to maintain efficient organization among the participants by collecting information from the underlying networks. Such individual information collection activities lead to duplicate efforts and contention for network resources. The Knowledge Plane (KP) is a new common construct that provides knowledge and expertise to meet the functional, policy and scaling requirements of network management, as well as to create synergy and exploit commonality among many network applications. To achieve these goals, we face many challenging problems, including widely distributed data collection, efficient processing of that data, wide availability of the expertise, etc. In this thesis, to provide better support for network management and large-scale network applications, I propose a knowledge plane architecture that consists of a network knowledge plane (NetKP) at the network layer, and on top of it, multiple specialized KPs (spec-KPs). The NetKP organizes agents to provide valuable knowledge and facilities about the Internet to the spec-KPs. Each spec-KP is specialized in its own area of interest. In both the NetKP and the spec-KPs...
Approved for public release, distribution is unlimited; Internet Protocol version six (IPv6), the next-generation Internet Protocol, exists only sparsely in today's world. However, as it gains popularity, it will grow into a vital part of the Internet and of communications technology in general. Many large organizations, including the Department of Defense, are working toward deploying IPv6 in many varied applications. This thesis focuses on the design and implementation issues that accompany a migration from Internet Protocol version four (IPv4) to IPv6 in the Monterey Security Enhanced Architecture (MYSEA). The research for this thesis consists of two major parts: a functional comparison between the IPv6 and IPv4 designs, and a prototype implementation of MYSEA with IPv6. The current MYSEA prototype relies on a subset of Network Address Translation (NAT) functionality to support the network's operation; because IPv6 has no native support for NAT, this work also requires the creation of a similar mechanism for IPv6. This thesis provides a preliminary examination of IPv6 in MYSEA, a necessary step in determining whether the new protocol will assist with or detract from the enforcement of MYSEA policies.; Ensign...
Approved for public release, distribution unlimited; Server and Agent-based Active Network Management (SAAM) is a promising network management solution for the Internet of tomorrow, the "Next Generation Internet (NGI)." SAAM is a new network architecture that incorporates many of the latest features of Internet technologies. The primary purpose of SAAM is managing network quality of service (QoS) to support resource-intensive next-generation Internet applications. Best-effort (BE) traffic will continue to exist in the era of the NGI, so SAAM must be able to manage such traffic. In this thesis, we propose a solution for the management of BE traffic within SAAM. With SAAM, it is possible to make a "better best effort" in routing BE packets. Currently, routers handle BE traffic based solely on local information or on information obtained by link-state flooding, which may not be reliable. In contrast, SAAM centralizes management at a server, where better (more nearly optimal) decisions can be made. SAAM's servers have access to accurate topology and timely traffic-condition information. Additionally, due to their placement on high-end routers or dedicated machines, the servers can better afford computationally intensive routing solutions. It is these characteristics that are exploited by the solution design and implementation of this thesis.; Lieutenant...
In the field of microelectronics, a device simulator is an important engineering tool with tremendous educational value. With a device simulator, a student can examine the characteristics of a microelectronic device described by a particular model. This makes it easier to develop an intuition for the general behavior of that device and examine the impact of particular device parameters on device characteristics. In this thesis, we designed and implemented the MIT Device Simulation WebLab ("WeblabSim"), an online simulator for exploring the behavior of microelectronic devices. WeblabSim makes a device simulator readily available to users on the web anywhere, and at any time. Through a Java applet interface, a user connected to the Internet specifies and submits a simulation to the system. A program performs the simulation on a computer that can be located anywhere else on the Internet. The results are then sent back to the user's applet for graphing and further analysis. The WeblabSim system uses a three-tier design based on the iLab Batched Experiment Architecture. It consists of a client applet that lets users configure simulations, a laboratory server that runs them, and a generic service broker that mediates between the two through SOAP-based web services. We have implemented a graphical client applet...
A challenge in today's Internet is providing easy collaboration across administrative boundaries. Using and sharing resources between individuals in different administrative domains should be just as easy and secure as sharing them within a single domain. This thesis presents a new authentication service and a new remote login and execution utility that address this challenge. The authentication service contributes a new design point in the space of user authentication systems. The system provides the flexibility to create cross-domain groups in the context of a global, network file system using a familiar, intuitive interface for sharing files that is similar to local access-control mechanisms. The system trades off freshness for availability by pre-fetching and caching remote users and groups defined in other administrative domains, so the file server can make authorization decisions at file-access time using only local information. The system offers limited privacy for group lists and has all-or-nothing delegation to other administrative domains via nested groups. Experiments demonstrate that the authentication server scales to groups with tens of thousands of members. REX contributes a new architecture for remote execution that offers extensibility and security. To achieve extensibility... (cont.) selectively delegates authority to processes running on remote machines that need to access other resources. The delegation mechanism lets users incrementally construct trust policies for remote machines. Measurements of the system demonstrate that the modularity of REX's architecture does not come at the cost of performance.
The present Internet routing system faces two challenging problems. First, unlike in the telephone system, Internet users cannot choose their wide-area Internet service providers (ISPs) separately from their local access providers. With the introduction of new technologies such as broadband residential service and fiber-to-the-home, the local ISP market is often a monopoly or a duopoly. The lack of user choice is likely to reduce competition among wide-area ISPs, limiting the incentives for wide-area ISPs to improve quality of service, reduce prices, and offer new services. Second, the present routing system fails to scale effectively in the presence of real-world requirements such as multi-homing for robust and redundant Internet access. A multi-homed site increases the amount of routing state maintained globally by the Internet routing system. As the demand for multi-homing continues to rise... (cont.) mechanism, a user only needs to know a small region of the Internet in order to select a route to reach a destination. In addition, a novel route representation and packet-forwarding scheme is designed such that a source and a destination address can uniquely represent the sequence of providers a packet traverses. Network measurement, simulation, and analytic modeling are used in combination to evaluate the design of NIRA. The evaluation suggests that NIRA is scalable.
This paper provides an interdisciplinary perspective on the role of
prosumers in future Internet design, based on the current trend of Internet
user empowerment. The paper debates the prosumer role and addresses models
for developing a symmetric Internet architecture and supply chain based on
the integration of social-capital aspects. Its goal is to ignite discussion
of a socially driven Internet architectural design.
In the current architecture of the Internet, there is a strong asymmetry in
terms of power between the entities that gather and process personal data
(e.g., major Internet companies, telecom operators, cloud providers, ...) and
the individuals from whom this personal data originates. In particular,
individuals have no choice but to blindly trust that these entities will
respect their privacy and protect their personal data. In this position
paper, we address this issue by proposing a utopian crypto-democracy model
based on existing scientific achievements from the field of cryptography.
More precisely, our main objective is to show that cryptographic primitives,
including in particular secure multiparty computation, offer a practical
solution for protecting privacy while minimizing trust assumptions. In the
envisioned crypto-democracy, individuals do not have to trust a single
physical entity with their personal data; rather, their data is distributed
among several institutions. Together these institutions form a virtual
entity called the Trustworthy, which is responsible for storing this data
but can also compute on it (provided that all the institutions agree).
Finally, we also propose a realistic proof-of-concept of the Trustworthy...
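The multiparty-computation idea behind the Trustworthy can be illustrated with additive secret sharing: each individual's value is split among n institutions so that no strict subset of them learns anything, yet the institutions can jointly compute on the shares. A minimal sketch, where the modulus and the three-institution setup are illustrative choices:

```python
import random

P = 2**61 - 1  # prime modulus for arithmetic on shares

def share(secret, n):
    """Additively split `secret` among n institutions; any strict
    subset of the shares reveals nothing about the secret."""
    parts = [random.randrange(P) for _ in range(n - 1)]
    parts.append((secret - sum(parts)) % P)
    return parts

def reconstruct(parts):
    return sum(parts) % P

# Computing on distributed data: each institution locally adds its
# shares of two secrets; reconstruction (which requires all of them
# to cooperate) yields the sum without any one party seeing a or b.
a, b = 1234, 5678
shares_a, shares_b = share(a, 3), share(b, 3)
local_sums = [(x + y) % P for x, y in zip(shares_a, shares_b)]
```

Full secure multiparty computation adds multiplication and comparison protocols on top of this, but the trust model is the same: only the agreement of all institutions unlocks a result.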
With the increasing complexity of network applications and Internet
traffic, network processors, as a subset of embedded processors, have to
handle ever more computation-intensive tasks. Feature-size scaling and the
emergence of chip multiprocessors (CMPs), which are usually multithreaded
processors, have largely met these performance requirements. Since
multithreaded processors are the heirs of single-threaded processors, and
there is no general design flow for designing a multithreaded embedded
processor, in this paper we perform a comprehensive design-space exploration
for an optimal single-threaded embedded processor under limited area and
power budgets. Finally, we run multiple threads on this architecture to find
the maximum thread-level parallelism (TLP) based on the
performance-per-power and area-optimal single-threaded architecture.; Comment: International Journal of Embedded Systems and Applications (IJESA),
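A design-space sweep of this kind can be sketched as an exhaustive search over a few microarchitectural parameters under power and area budgets, scoring each feasible point by performance per watt. The analytical models and budget numbers below are purely illustrative stand-ins, not the paper's actual cost functions:

```python
from itertools import product

# Hypothetical analytical cost models; all numbers are illustrative.
def perf(width, cache_kb):
    return width * 100 + cache_kb * 3          # abstract performance score

def power(width, cache_kb):
    return width ** 2 * 0.5 + cache_kb * 0.01  # watts

def area(width, cache_kb):
    return width * 1.5 + cache_kb * 0.05       # mm^2

POWER_BUDGET, AREA_BUDGET = 6.0, 8.0  # illustrative budgets

def explore():
    """Exhaustively sweep issue width and cache size, keep designs
    within both budgets, and maximize performance per watt."""
    best, best_score = None, -1.0
    for width, cache_kb in product([1, 2, 4], [16, 32, 64]):
        if power(width, cache_kb) > POWER_BUDGET:
            continue
        if area(width, cache_kb) > AREA_BUDGET:
            continue
        score = perf(width, cache_kb) / power(width, cache_kb)
        if score > best_score:
            best, best_score = (width, cache_kb), score
    return best, best_score

best_config, best_score = explore()
```

Real design-space exploration replaces these closed-form models with cycle-accurate simulation, but the budget-constrained argmax structure is the same.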
An effective architecture for the Internet of Things (IoT), particularly for
an emerging nation like India with limited technology penetration at the
national scale, should be based on tangible technology advances in the present,
practical application scenarios of social and entrepreneurial value, and
ubiquitous capabilities that make the realization of IoT affordable and
sustainable. Humans, data, communication and devices play key roles in the IoT
ecosystem that we perceive. In a push towards this sustainable and practical
IoT Architecture for India, we synthesize ten design paradigms to consider.
Caffe provides multimedia scientists and practitioners with a clean and
modifiable framework for state-of-the-art deep learning algorithms and a
collection of reference models. The framework is a BSD-licensed C++ library
with Python and MATLAB bindings for training and deploying general-purpose
convolutional neural networks and other deep models efficiently on commodity
architectures. Caffe fits industry and internet-scale media needs by CUDA GPU
computation, processing over 40 million images a day on a single K40 or Titan
GPU ($\approx$ 2.5 ms per image). By separating model representation from
actual implementation, Caffe allows experimentation and seamless switching
among platforms for ease of development and deployment from prototyping
machines to cloud environments. Caffe is maintained and developed by the
Berkeley Vision and Learning Center (BVLC) with the help of an active community
of contributors on GitHub. It powers ongoing research projects, large-scale
industrial applications, and startup prototypes in vision, speech, and
multimedia.; Comment: Tech report for the Caffe software at http://github.com/BVLC/Caffe/
Information security is an issue of global concern. As the Internet delivers
great convenience and benefits to modern society, the rapidly increasing
connectivity and accessibility to the Internet also pose a serious threat to
security and privacy, to individuals, organizations, and nations alike.
Finding effective ways to detect, prevent, and respond to intrusions and
hacker attacks on networked computers and information systems has therefore
become essential. This paper presents a knowledge-discovery framework to
detect DoS attacks at the boundary controllers (routers). The idea is to use
a machine learning approach to discover network features that can depict the
state of a network connection. Using important network data (DoS-relevant
features), we have developed kernel-machine-based and soft-computing
detection mechanisms that achieve high detection accuracy. We also present
our work on identifying DoS-pertinent features and evaluating the
applicability of these features in detecting novel DoS attacks. An
architecture for detecting DoS attacks at the router is presented. We
demonstrate that highly efficient and accurate signature-based classifiers
can be constructed by using important network features and machine learning
techniques to detect DoS attacks at the boundary
controllers.; Comment: IEEE Publication format...
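The flavor of such a feature-based detector can be sketched with a simple perceptron over hypothetical per-connection features (packet rate, SYN ratio, source diversity). The paper itself uses kernel machines and soft computing, so this linear stand-in and its toy data are only illustrative:

```python
# Hypothetical per-connection features: (packets_per_sec, syn_ratio,
# distinct_source_ratio); label 1 marks a DoS window, 0 normal traffic.
DATA = [
    ((9000.0, 0.95, 0.90), 1),
    ((8000.0, 0.90, 0.85), 1),
    ((120.0, 0.10, 0.20), 0),
    ((300.0, 0.15, 0.30), 0),
]

def normalize(x):
    # Scale the raw packet rate so no single feature dominates.
    return (x[0] / 10000.0, x[1], x[2])

def train(data, epochs=20, lr=0.1):
    """Train a perceptron: a linear stand-in for the paper's
    kernel-machine detectors."""
    w, b = [0.0, 0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in data:
            xn = normalize(x)
            pred = 1 if sum(wi * xi for wi, xi in zip(w, xn)) + b > 0 else 0
            err = y - pred
            w = [wi + lr * err * xi for wi, xi in zip(w, xn)]
            b += lr * err
    return w, b

def classify(w, b, x):
    xn = normalize(x)
    return 1 if sum(wi * xi for wi, xi in zip(w, xn)) + b > 0 else 0

w, b = train(DATA)
```

Deployed at a boundary router, such a classifier would score each traffic window as it arrives; the hard part, which the paper addresses, is choosing DoS-pertinent features that generalize to novel attacks.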
In a video-on-demand system, the main video repository may be far away from
the user and generally has limited streaming capacity. Since a high-quality
video is huge, streaming it over the Internet requires high bandwidth. In
order to achieve a higher video hit ratio and reduced client waiting time, a
distributed-server architecture can be used, in which multiple local servers
are placed close to clients and video contents are cached dynamically from
the main server based on regional demand. As the cost of proxy servers
decreases and the demand for reduced waiting time increases day by day,
newer architectures are being explored and innovative schemes arrived at. In
this paper we present a novel three-layer architecture that includes a main
multimedia server, a Tracker, and proxy servers. This architecture aims to
minimize client waiting time. We also propose an efficient prefix-caching
and load-sharing algorithm at the proxy server to allocate the cache
according to the regional popularity of each video. The simulation results
demonstrate that it achieves significantly lower client waiting time
compared to other existing algorithms.; Comment: International Journal of Computer Science Issues, IJCSI, Vol. 7,
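Popularity-driven prefix caching of this kind can be sketched as allocating each video a cached prefix proportional to its regional demand, so popular videos start streaming immediately from the proxy while the remainder is fetched from the main server. The allocation rule below is a simplified illustration, not the paper's exact algorithm:

```python
def allocate_prefix_cache(popularity, cache_size):
    """Give each video a cached prefix proportional to its regional
    popularity (request counts observed at this proxy)."""
    total = sum(popularity.values())
    return {vid: int(cache_size * count / total)
            for vid, count in popularity.items()}

def serve(offset, prefix_len):
    """A request inside the cached prefix is served locally (hiding
    startup delay); anything beyond it must be fetched remotely."""
    return "prefix_hit" if offset < prefix_len else "fetch_remote"

# Illustrative regional demand and a 1000-block proxy cache.
alloc = allocate_prefix_cache({"v1": 60, "v2": 30, "v3": 10}, 1000)
```

Because the prefix covers the start of playback, the client sees low waiting time even when the tail of the video still has to travel from the distant main server.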
This paper examines the ideological and policy consensus that shaped
computing research funded by the Information Processing Techniques Office
(IPTO) within the Department of Defense's Advanced Research Projects Agency
(ARPA). This historical case study of the period between Sputnik and the
creation of the ARPANET shows how military, scientific, and academic values
shaped the institutions and relations of a foundational period in the creation
of the Internet.
The paper probes three areas: the ideology of the science policy consensus,
the institutional philosophy of IPTO under J. C. R. Licklider, and the ways
that this consensus and philosophy shaped IPTO research in the period leading
to the creation of the ARPANET. By examining the intellectual, cultural, and
institutional details of the consensus that governed IPTO research between 1957
and 1969, we can understand the ways that these values defined the range of
possibilities for network computing.
The influence of the social values expressed by these actors was decisive:
that government had an obligation to support a broad base of scientific
research to promote both the public good and the national defense; that
IPTO-sponsored computing research would accomplish both military and scientific
objectives; and that IPTO could leverage its power within this consensus to
create a network to share resources and unite researchers over geographical
distance. A greater awareness of the ways that "consensus" worked in this
period -- the "pre-history" of the Internet -- provides a richer context for
evaluating the unique features of the Internet...
This paper investigated how doctors in remote rural hospitals in South
Africa use a computer-mediated tool to communicate with experienced and
specialist doctors for professional advice to improve their clinical
practice. A case-study approach was used. Ten doctors were purposively
selected from ten hospitals in the North West Province. Data was collected
using semi-structured, open-ended interview questions. The interviewees were
asked to describe, in their own words, the average number of patients served
per week, the processes used in consultation with other doctors,
communication practices using the computer-mediated tool, the transmission
speed of the computer-mediated tool, and their satisfaction with using the
computer-mediated communication tool. The findings revealed that an average
of 15 consultations per doctor per week with a specialist doctor were
conducted face to face or by telephone instead of through the
computer-mediated tool. Participants attributed their non-use of the
computer-mediated tool to the slow transmission speed of the Internet,
frequent outages of Internet connectivity, constant electricity power
outages, and the lack of e-health application software to support real-time
computer-mediated communication. The results led to the recommendation of a
hybrid cloud-computing architecture for improving communication between
doctors in hospitals.; Comment: 10
Cybersecurity attacks are a major and increasing burden to economic and
social systems globally. Here we analyze the principles of security in
different domains and demonstrate an architectural flaw in current
cybersecurity. Cybersecurity is inherently weak because it is missing the
ability to defend the overall system instead of individual computers. The
current architecture enables all nodes in the computer network to communicate
transparently with one another, so security would require protecting every
computer in the network from all possible attacks. In contrast, other systems
depend on system-wide protections. In providing conventional security, police
patrol neighborhoods and the military secures borders, rather than defending
each individual household. Likewise, in biology, the immune system provides
security against viruses and bacteria using primarily action at the skin,
membranes, and blood, rather than requiring each cell to defend itself. We
propose applying these same principles to address the cybersecurity challenge.
This will require: (a) Enabling pervasive distribution of self-propagating
securityware and creating a developer community for such securityware, and (b)
Modifying the protocols of internet routers to accommodate adaptive security
software that would regulate internet traffic. The analysis of the immune
system architecture provides many other principles that should be applied to
cybersecurity. Among these principles is a careful interplay of detection and
action that includes evolutionary improvement. However...