Page 2 of results (3715 digital items found in 0.061 seconds)

A conceptual architecture with trust consensus to enhance group recommendations

Santos Junior, Edson Benedito dos; Manzato, Marcelo Garcia; Goularte, Rudinei
Source / Publisher: Institute of Electrical and Electronics Engineers - IEEE; International Association for Computer and Information Science - ACIS; Taiyuan
Type: Conference Paper or Conference Object
Language: Portuguese
Search Relevance: 57.55695%
Recommender Systems have been studied and developed as an indispensable technique of the Information Filtering field. A drawback of traditional user-item systems is that most recommenders ignore the social connections that underlie real-world recommendations. Furthermore, trust-based approaches ignore group modeling and do not respect users' individualities in a group recommendation set. In this paper, we propose a conceptual architecture which uses the social trust consensus of users to improve the accuracy of trust-based recommender systems. It is based on an existing model and integrates users' trust relations and items' factors into a generic latent factor model. One advantage of our model is the possibility of biasing the users' similarity computation according to a trust consensus that assists in the formation of groups, such as a group of individuals who share the same content. The proposal represents the first steps towards the development of a group recommender system model. We provide an evaluation of our method with the Epinions dataset and compare our approach against other state-of-the-art techniques.; CAPES; CNPq; FAPESP
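The idea of biasing user-similarity computation by a trust consensus can be sketched as follows. This is an illustrative sketch, not the paper's actual model: the linear blend and the `alpha` weight are assumptions introduced here.

```python
import math

def cosine(u, v):
    """Cosine similarity over the items both users rated."""
    common = set(u) & set(v)
    if not common:
        return 0.0
    num = sum(u[i] * v[i] for i in common)
    du = math.sqrt(sum(u[i] ** 2 for i in common))
    dv = math.sqrt(sum(v[i] ** 2 for i in common))
    return num / (du * dv)

def trust_biased_similarity(u, v, trust, alpha=0.5):
    """Blend rating similarity with a trust consensus score in [0, 1].
    alpha controls how strongly trust biases the similarity."""
    return (1 - alpha) * cosine(u, v) + alpha * trust

ratings_a = {"film1": 5, "film2": 3}
ratings_b = {"film1": 4, "film2": 2}
print(trust_biased_similarity(ratings_a, ratings_b, trust=0.9))
```

A group could then be formed by clustering users whose trust-biased similarity exceeds a threshold.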

Designing for long-term human-robot interaction and application to weight loss

Kidd, Cory David, 1977-
Source / Publisher: Massachusetts Institute of Technology
Type: Doctoral Thesis Format: 251 p.
Language: Portuguese
Search Relevance: 57.47631%
Human-robot interaction is now well enough understood to allow us to build useful systems that can function outside of the laboratory. This thesis defines the sociable robot system in the context of long-term interaction, proposes guidelines for creating and evaluating such systems, and describes the implementation of a robot that has been designed to help individuals effect behavior change while dieting. The implemented system is a robotic weight loss coach, which is compared to a standalone computer and to a traditional paper log in a controlled study. A current challenge in weight loss is getting individuals to keep off the weight that is lost. The results of our study show that participants track their calorie consumption and exercise for nearly twice as long when using the robot as with the other methods, and develop a closer relationship with the robot. Both of these are indicators of longer-term success at weight loss and maintenance.; by Cory David Kidd.; Thesis (Ph. D.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2008.; Includes bibliographical references (p. 241-251).

System-on-a-Chip (SoC) based Hardware Acceleration in Register Transfer Level (RTL) Design

Niu, Xinwei
Source / Publisher: FIU Digital Commons
Type: Journal Article Format: application/pdf
Language: Portuguese
Search Relevance: 57.4259%
Today, modern System-on-a-Chip (SoC) systems have grown rapidly due to increased processing power, while maintaining the size of the hardware circuit. The number of transistors on a chip continues to increase, but current SoC designs may not be able to exploit the potential performance, especially with energy consumption and chip area becoming two major concerns. Traditional SoC designs usually separate software and hardware. Thus, the process of improving system performance is a complicated task for both software and hardware designers. The aim of this research is to develop a hardware acceleration workflow for software applications, so that system performance can be improved under constraints on energy consumption and on-chip resource costs. The characteristics of software applications can be identified by using profiling tools. Hardware acceleration can yield significant performance improvements for highly mathematical calculations or repeated functions. The performance of SoC systems can then be improved if the hardware acceleration method is used to accelerate the element that incurs performance overheads. The concepts mentioned in this study can be easily applied to a variety of sophisticated software applications. The contributions of SoC-based hardware acceleration in the hardware-software co-design platform include the following: (1) Software profiling methods are applied to the H.264 Coder-Decoder (CODEC) core. The hotspot function of the target application is identified by using critical attributes such as cycles per loop...
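The first step of such a workflow, finding the hotspot function with a profiler, can be sketched with Python's standard `cProfile`; the toy workload below is hypothetical and stands in for a CODEC kernel:

```python
import cProfile
import io
import pstats

def hotspot(n):
    # A repeated arithmetic loop: the kind of function that profiling
    # flags as an acceleration candidate.
    total = 0
    for i in range(n):
        total += (i * i) % 97
    return total

def application():
    # The surrounding application calls the kernel many times.
    for _ in range(50):
        hotspot(10_000)

profiler = cProfile.Profile()
profiler.enable()
application()
profiler.disable()

# Rank functions by cumulative time; the hotspot dominates the report.
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
print(stream.getvalue())
```

In a real co-design flow the function at the top of this report would be the candidate moved into RTL.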

Real-Time Scheduling of Embedded Applications on Multi-Core Platforms

Fan, Ming
Source / Publisher: FIU Digital Commons
Type: Journal Article Format: application/pdf
Language: Portuguese
Search Relevance: 57.56269%
For the past several decades, we have experienced tremendous growth, in both scale and scope, of real-time embedded systems, thanks largely to advances in IC technology. However, the traditional approach of getting a performance boost by increasing CPU frequency has become a thing of the past. Researchers from both industry and academia are turning their focus to multi-core architectures for continuous improvement of computing performance. In our research, we seek to develop efficient scheduling algorithms and analysis methods for the design of real-time embedded systems on multi-core platforms. Real-time systems are those for which the response time is as critical as the logical correctness of the computational results. In addition, a variety of stringent constraints such as power/energy consumption, peak temperature and reliability are also imposed on these systems. Therefore, real-time scheduling plays a critical role in the design of such computing systems at the system level. We started our research by addressing timing constraints for real-time applications on multi-core platforms, and developed both partitioned and semi-partitioned scheduling algorithms to schedule fixed-priority, periodic, hard real-time tasks on multi-core platforms. We then extended our research by taking temperature constraints into consideration, and developed a closed-form solution to capture the temperature dynamics of a given periodic voltage schedule on multi-core platforms...
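A minimal sketch of partitioned fixed-priority scheduling in this spirit: tasks are assigned to cores first-fit, admitting a task onto a core only while the core still passes the classic Liu & Layland rate-monotonic utilization bound. This is a standard textbook heuristic, not the authors' algorithm.

```python
def rm_bound(n):
    """Liu & Layland utilization bound for n tasks under rate-monotonic
    scheduling: n * (2^(1/n) - 1)."""
    return n * (2 ** (1 / n) - 1)

def first_fit_partition(tasks, num_cores):
    """Assign (wcet, period) tasks to cores first-fit, using the RM
    utilization bound as a sufficient schedulability test per core."""
    cores = [[] for _ in range(num_cores)]
    # RM priority order: shorter period = higher priority.
    for wcet, period in sorted(tasks, key=lambda t: t[1]):
        for core in cores:
            candidate = core + [(wcet, period)]
            util = sum(c / p for c, p in candidate)
            if util <= rm_bound(len(candidate)):
                core.append((wcet, period))
                break
        else:
            raise ValueError("task set not schedulable with this heuristic")
    return cores

tasks = [(1, 4), (2, 5), (2, 10), (1, 20)]
print(first_fit_partition(tasks, num_cores=2))
```

Semi-partitioned schemes relax exactly the step above: a task rejected by every core may be split across cores instead of failing.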

Techniques for Efficient Execution of Large-Scale Scientific Workflows in Distributed Environments

Kalayci, Selim
Source / Publisher: FIU Digital Commons
Type: Journal Article Format: application/pdf
Language: Portuguese
Search Relevance: 57.54342%
Scientific exploration demands heavy usage of computational resources for large-scale and deep analysis in many different fields. The complexity or the sheer scale of the computational studies can sometimes be encapsulated in the form of a workflow that is made up of numerous dependent components. Due to its decomposable and parallelizable nature, different components of a scientific workflow may be mapped over a distributed resource infrastructure to reduce time to results. However, the resource infrastructure may be heterogeneous, dynamic, and under diverse administrative control. Workflow management tools are utilized to help manage and deal with various aspects of the lifecycle of such complex applications. One particular and fundamental aspect that has to be handled as smoothly and efficiently as possible is the run-time coordination of workflow activities (i.e. workflow orchestration). Our efforts in this study are focused on improving the workflow orchestration process in such dynamic and distributed resource environments. We tackle three main aspects of this process and provide contributions in each of them. Our first contribution involves increasing the scalability and site autonomy in situations where the mapped components of a workflow span several heterogeneous administrative domains. We devise and implement a generic decentralization framework for orchestration of workflows under such conditions. Our second contribution addresses the issues that arise due to the dynamic nature of such environments. We provide generic adaptation mechanisms that are highly transparent and also substantially less intrusive with respect to the rest of the workflow in execution. Our third contribution is to improve the efficiency of orchestration of large-scale parameter-sweep workflows. By exploiting their specific characteristics...
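At its core, run-time coordination of dependent workflow components means executing a DAG in dependency order. A minimal sketch using Kahn's algorithm, with hypothetical component names:

```python
from collections import deque

def orchestrate(deps):
    """Return a valid execution order for workflow components.
    deps maps each component to the set of components it depends on."""
    indeg = {n: len(d) for n, d in deps.items()}
    dependents = {n: [] for n in deps}
    for n, d in deps.items():
        for parent in d:
            dependents[parent].append(n)
    ready = deque(n for n, k in indeg.items() if k == 0)
    order = []
    while ready:
        n = ready.popleft()           # in a real engine: dispatch n here
        order.append(n)
        for child in dependents[n]:
            indeg[child] -= 1
            if indeg[child] == 0:     # all inputs of child are now available
                ready.append(child)
    if len(order) != len(deps):
        raise ValueError("cycle detected: not a valid workflow")
    return order

workflow = {"fetch": set(), "clean": {"fetch"}, "analyze": {"clean"},
            "plot": {"analyze"}, "report": {"analyze"}}
print(orchestrate(workflow))
```

A decentralized orchestrator distributes exactly this bookkeeping: each administrative domain tracks the in-degrees of its own components and notifies peers when a shared dependency completes.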

Well, you didn't say NOT to! A formal systems engineering approach to teaching an unruly architecture good behavior

Giammarco, Kristin; Auguston, Mikhail
Source / Publisher: Elsevier
Type: Journal Article
Language: Portuguese
Search Relevance: 57.536094%
http://dx.doi.org/10.1016/j.procs.2013.09.273; This paper proposes a formal modeling approach for predicting emergent reactive system and system of systems (SoS) behaviors resulting from the interactions among subsystems and between the system and its environment. The approach emphasizes specification of component behavior and component interaction as separate concerns at the architectural level, consistent with well-accepted definitions of SoS. The Monterey Phoenix (MP) approach provides features for the production of emergent SoS behaviors. An example highlights limitations of current modeling languages and approaches that hinder prediction of emergent behavior, and demonstrates how the application of MP can enhance SoS modeling capability through the following principles: (1) model component interactions as general rules, orthogonal to the component behavior; (2) automatically extract possible scenarios (use cases) from descriptions of system behavior; (3) test system behavior against stakeholder expectations/requirements using scenario inspection and assertion checking. MP provides a new capability for automatically verifying system behavior early in the lifecycle, when design flaws are most easily and inexpensively corrected. MP extends existing frameworks and allows multiple visualizations for different stakeholders...

Distributed computer-controlled systems: the DEAR-COTS approach

Veríssimo, Paulo; Casimiro, António; Pinho, Luis Miguel; Vasques, Francisco; Rodrigues, Luís; Tovar, Eduardo
Source / Publisher: IPP-Hurray Group
Type: Report
Published: 2000
Language: Portuguese
Search Relevance: 57.598613%
This paper proposes a new architecture targeting real-time and reliable Distributed Computer-Controlled Systems (DCCS). This architecture provides a structured approach for the integration of soft and/or hard real-time applications with Commercial Off-The-Shelf (COTS) components. The Timely Computing Base model is used as the reference model to deal with the heterogeneity of system components with respect to guaranteeing the timeliness of applications. The reliability and availability requirements of hard real-time applications are guaranteed by a software-based fault-tolerance approach.; FCT

A Scalable VLSI Architecture for Soft-Input Soft-Output Depth-First Sphere Decoding

Witte, Ernst Martin; Borlenghi, Filippo; Ascheid, Gerd; Leupers, Rainer; Meyr, Heinrich
Source / Publisher: Cornell University
Type: Journal Article
Language: Portuguese
Search Relevance: 57.409604%
Multiple-input multiple-output (MIMO) wireless transmission imposes huge challenges on the design of efficient hardware architectures for iterative receivers. A major challenge is soft-input soft-output (SISO) MIMO demapping, often approached by sphere decoding (SD). In this paper, we introduce the (to the best of our knowledge) first VLSI architecture for SISO SD applying a single tree-search approach. Compared with a soft-output-only base architecture similar to the one proposed by Studer et al. in IEEE J-SAC 2008, the architectural modifications for soft input still allow a one-node-per-cycle execution. For a 4x4 16-QAM system, the area increases by 57% and the operating frequency degrades by only 34%.; Comment: Accepted for IEEE Transactions on Circuits and Systems II Express Briefs, May 2010. This draft from April 2010 will not be updated any more. Please refer to IEEE Xplore for the final version. *) The final publication will appear with the modified title "A Scalable VLSI Architecture for Soft-Input Soft-Output Single Tree-Search Sphere Decoding"

Network QoS Management in Cyber-Physical Systems

Xia, Feng; Ma, Longhua; Dong, Jinxiang; Sun, Youxian
Source / Publisher: Cornell University
Type: Journal Article
Published: 19/05/2008
Language: Portuguese
Search Relevance: 57.39845%
Technical advances in ubiquitous sensing, embedded computing, and wireless communication are leading to a new generation of engineered systems called cyber-physical systems (CPS). CPS promises to transform the way we interact with the physical world just as the Internet transformed how we interact with one another. Before this vision becomes a reality, however, a large number of challenges have to be addressed. Network quality of service (QoS) management in this new realm is among those issues that deserve extensive research efforts. It is envisioned that wireless sensor/actuator networks (WSANs) will play an essential role in CPS. This paper examines the main characteristics of WSANs and the requirements of QoS provisioning in the context of cyber-physical computing. Several research topics and challenges are identified. As a sample solution, a feedback scheduling framework is proposed to tackle some of the identified challenges. A simple example is also presented that illustrates the effectiveness of the proposed solution.; Comment: To appear in The 2008 Int.Conf. on Embedded Software and Systems (ICESS), Chengdu, China, July 2008
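A feedback scheduler of the general kind mentioned can be sketched as a control loop that measures utilization and rescales sampling periods toward a set-point. The proportional law and the gain below are illustrative assumptions, not the paper's design:

```python
def feedback_schedule(periods, exec_times, target_util, gain=0.5):
    """One step of a simple feedback scheduler: measure the total
    utilization of the sampling tasks and rescale all periods toward
    the target utilization."""
    util = sum(c / p for c, p in zip(exec_times, periods))
    error = util - target_util
    # Increase periods (lower sampling rates) when overloaded,
    # decrease them when resources are underused.
    factor = 1 + gain * error / target_util
    return [p * factor for p in periods], util

periods = [10.0, 20.0, 40.0]      # sampling periods of three control loops
exec_times = [4.0, 4.0, 4.0]      # per-job execution (or transmission) times
new_periods, util = feedback_schedule(periods, exec_times, target_util=0.6)
print(util, new_periods)
```

Run at every sampling instant, the loop keeps the shared CPU or network near the utilization the QoS policy asks for, despite workload variations.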

Proceedings 11th International Workshop on Quantitative Aspects of Programming Languages and Systems

Bortolussi, Luca; Wiklicky, Herbert
Source / Publisher: Cornell University
Type: Journal Article
Language: Portuguese
Search Relevance: 57.515845%
Quantitative aspects of computation are important and sometimes essential in characterising the behavior and determining the properties of systems. They are related to the use of physical quantities (storage space, time, bandwidth, etc.) as well as mathematical quantities (e.g. probability and measures for reliability, security and trust). Such quantities play a central role in defining both the model of systems (architecture, language design, semantics) and the methodologies and tools for the analysis and verification of system properties. The aim of this workshop is to discuss the explicit use of quantitative information such as time and probabilities either directly in the model or as a tool for the analysis of systems.

Proceedings 10th Workshop on Quantitative Aspects of Programming Languages and Systems

Wiklicky, Herbert; Massink, Mieke
Source / Publisher: Cornell University
Type: Journal Article
Published: 02/07/2012
Language: Portuguese
Search Relevance: 57.62322%
This volume contains the proceedings of the Tenth Workshop on Quantitative Aspects of Programming Languages (QAPL 2012), held in Tallinn, Estonia, on March 31 and April 1, 2012. QAPL 2012 is a satellite event of the European Joint Conferences on Theory and Practice of Software (ETAPS 2012). The workshop theme is quantitative aspects of computation. These aspects are related to the use of physical quantities (storage space, time, bandwidth, etc.) as well as mathematical quantities (e.g. probability and measures for reliability, security and trust), and play an important (sometimes essential) role in characterising the behavior and determining the properties of systems. Such quantities are central to the definition of both the model of systems (architecture, language design, semantics) and the methodologies and tools for the analysis and verification of system properties. The aim of this workshop is to discuss the explicit use of quantitative information such as time and probabilities either directly in the model or as a tool for the analysis of systems.; Comment: EPTCS 85, 2012

An Optimal Controller Architecture for Poset-Causal Systems

Shah, Parikshit; Parrilo, Pablo
Source / Publisher: Cornell University
Type: Journal Article
Published: 30/11/2011
Language: Portuguese
Search Relevance: 57.559453%
We propose a novel and natural architecture for decentralized control that is applicable whenever the underlying system has the structure of a partially ordered set (poset). This controller architecture is based on the concept of Moebius inversion for posets, and enjoys simple and appealing separation properties, since the closed-loop dynamics can be analyzed in terms of decoupled subsystems. The controller structure provides rich and interesting connections between concepts from order theory such as Moebius inversion and control-theoretic concepts such as state prediction, correction, and separability. In addition, using our earlier results on H_2-optimal decentralized control for arbitrary posets, we prove that the H_2-optimal controller in fact possesses the proposed structure, thereby establishing the optimality of the new controller architecture.; Comment: 32 pages, 9 figures, submitted to IEEE Transactions on Automatic Control
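The Moebius inversion the controller rests on uses the Moebius function of the poset, which for any finite poset is defined by the standard recursion mu(x,x) = 1 and mu(x,y) = -sum of mu(x,z) over x <= z < y. A small sketch, computed here on the divisibility poset rather than a control structure:

```python
from functools import lru_cache

def moebius(poset_leq, elements):
    """Build the Moebius function mu(x, y) of a finite poset given its
    order relation poset_leq(a, b) and its ground set."""
    @lru_cache(maxsize=None)
    def mu(x, y):
        if x == y:
            return 1
        if not poset_leq(x, y):
            return 0
        # mu(x, y) = -sum over the half-open interval [x, y)
        return -sum(mu(x, z) for z in elements
                    if poset_leq(x, z) and poset_leq(z, y) and z != y)
    return mu

# Divisibility poset on the divisors of 12: mu(1, n) recovers the
# number-theoretic Moebius function.
divisors = (1, 2, 3, 4, 6, 12)
mu = moebius(lambda a, b: b % a == 0, divisors)
print([mu(1, d) for d in divisors])
```

In the paper's setting the poset elements are subsystems and Moebius inversion separates each subsystem's local correction term from the predictions of the subsystems below it.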

Hybrid Communication Architecture HCA

Visala, Kari
Source / Publisher: Cornell University
Type: Journal Article
Published: 15/07/2014
Language: Portuguese
Search Relevance: 57.707407%
The beginning of the 21st century has seen many projects on distributed hash tables, both research and commercial. One of their aims has been to replace the first generation of file sharing software with scalable peer-to-peer architectures. On other fronts, the same techniques are applied, for example, to content delivery networks, streaming networks, cooperative caches, distributed file systems, and grid computing architectures for scientific use. This trend has emerged because with cooperative peers it is possible to asymptotically enhance the use of resources in the sharing of data compared to the basic client-server architecture. The need for distribution of data is wide, and one could argue that it is as fundamental a building block as the message passing of the Internet. As an answer to this need, a new scalable architecture is introduced: Hybrid Communication Architecture (HCA), which provides both data sharing and message passing as communication primitives for applications. HCA can be regarded as an abstraction layer for communication which is further encapsulated by a higher-level middleware. HCA is aimed at general use, and it is not designed for any particular application. One key idea is to combine data sharing with streaming, since together they enable many applications not easily implementable with only one of these features. For example...
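The data-location primitive underlying DHT-style architectures is commonly consistent hashing: keys and nodes share one hash space, and a key is served by the first node clockwise from its position, so a join or leave moves only a small fraction of the keys. A minimal sketch (one hash point per node, no virtual nodes or replication):

```python
import hashlib
from bisect import bisect_right

class ConsistentHashRing:
    """Minimal consistent-hash ring for locating the node responsible
    for a key in a peer-to-peer data sharing layer."""

    def __init__(self, nodes):
        # Each node occupies one point on the ring, sorted by hash.
        self.ring = sorted((self._h(n), n) for n in nodes)

    @staticmethod
    def _h(s):
        return int(hashlib.sha1(s.encode()).hexdigest(), 16)

    def lookup(self, key):
        """First node clockwise from the key's hash (wrapping around)."""
        points = [p for p, _ in self.ring]
        i = bisect_right(points, self._h(key)) % len(self.ring)
        return self.ring[i][1]

ring = ConsistentHashRing(["node-a", "node-b", "node-c"])
print(ring.lookup("some-file.iso"))
```

A real DHT replaces the linear node list with a routing table so lookups cost O(log n) messages instead of global knowledge.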

Proceedings Twelfth International Workshop on Quantitative Aspects of Programming Languages and Systems

Bertrand, Nathalie; Bortolussi, Luca
Source / Publisher: Cornell University
Type: Journal Article
Published: 05/06/2014
Language: Portuguese
Search Relevance: 57.652197%
This volume contains the proceedings of the Twelfth Workshop on Quantitative Aspects of Programming Languages and Systems (QAPL 2014), held in Grenoble, France, on 12 and 13 April, 2014. QAPL 2014 was a satellite event of the European Joint Conferences on Theory and Practice of Software (ETAPS). The central theme of the workshop is that of quantitative aspects of computation. These aspects are related to the use of physical quantities (storage space, time, bandwidth, etc.) as well as mathematical quantities (e.g. probability and measures for reliability, security and trust), and play an important (sometimes essential) role in characterising the behaviour and determining the properties of systems. Such quantities are central to the definition of both the model of systems (architecture, language design, semantics) and the methodologies and tools for the analysis and verification of system properties. The aim of this workshop is to discuss the explicit use of quantitative information such as time and probabilities either directly in the model or as a tool for the analysis of systems.

Denial of service attack in the Internet: agent-based intrusion detection and reaction

Ignatenko, O.
Source / Publisher: Cornell University
Type: Journal Article
Published: 27/04/2009
Language: Portuguese
Search Relevance: 57.607734%
This paper deals with denial of service attacks. An overview of the existing attacks and methods is given, and a classification scheme for the different denial of service attacks is presented. An agent-based intrusion detection system architecture is then considered, along with the main components and working principles of such systems.; Comment: 6 pages, 3 figures

Optimal Radio Resource Allocation for Hybrid Traffic in Cellular Networks: Centralized and Distributed Architecture

Abdelhadi, Ahmed; Ghorbanzadeh, Mo; Clancy, Charles
Source / Publisher: Cornell University
Type: Journal Article
Published: 12/11/2014
Language: Portuguese
Search Relevance: 57.542354%
Optimal resource allocation is of paramount importance in utilizing the scarce radio spectrum efficiently and provisioning quality of service for the miscellaneous user applications that generate hybrid data traffic streams in present-day wireless communications systems. The dynamism of the hybrid traffic, stemming from concurrently running mobile applications with temporally varying usage percentages, together with subscriber priorities imposed from the network providers' perspective, necessitates resource allocation schemes that assign the spectrum to the applications accordingly and optimally. This manuscript formulates novel centralized and distributed radio resource allocation optimization problems for hybrid-traffic cellular networks serving users who simultaneously run multiple delay-tolerant and real-time applications, modelled as logarithmic and sigmoidal utility functions, with volatile application usage percentages and diverse subscriptions. Casting the problems under utility proportional fairness entails no lost calls for the proposed methods, for which we establish convexity, devise computationally efficient algorithms that provide optimal rates to the applications, and prove a mutual mathematical equivalence. Ultimately, the algorithms' performance is evaluated via simulations and a discussion of the germane numerical results.
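The shape of such an allocation can be sketched for the logarithmic (delay-tolerant) utilities alone. Utility proportional fairness maximizes the sum of the logarithms of the utilities; at the optimum every application equalizes U'(r)/U(r) to a common shadow price, which bisection can find. The utility parameters, capacity, and solver below are illustrative assumptions, not the paper's algorithms:

```python
import math

def utility(r, k):
    """Logarithmic utility U(r) = log(1 + k*r) of a delay-tolerant app."""
    return math.log(1 + k * r)

def rate_for_price(lam, k, r_max=1e6):
    """Solve U'(r)/U(r) = lam for r by bisection; the left-hand side
    is strictly decreasing in r."""
    def excess(r):
        return k / ((1 + k * r) * utility(r, k)) - lam
    lo, hi = 1e-9, r_max
    if excess(hi) > 0:          # even r_max is affordable at this price
        return r_max
    for _ in range(100):
        mid = (lo + hi) / 2
        if excess(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def allocate(ks, capacity):
    """Bisection on the shadow price until the rates fill the capacity."""
    lo, hi = 1e-9, 1e6
    rates = []
    for _ in range(100):
        lam = (lo + hi) / 2
        rates = [rate_for_price(lam, k) for k in ks]
        if sum(rates) > capacity:
            lo = lam            # demand exceeds capacity: raise the price
        else:
            hi = lam
    return rates

rates = allocate([0.5, 1.0, 2.0], capacity=30.0)
print(rates, sum(rates))
```

Because every logarithmic utility is strictly positive for any positive rate, each application always receives a nonzero rate, which is the "no lost calls" property the abstract refers to.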

DRAFT : Task System and Item Architecture (TSIA)

Burow, Burkhard D.
Source / Publisher: Cornell University
Type: Journal Article
Published: 04/05/1999
Language: Portuguese
Search Relevance: 57.68688%
During its execution, a task is independent of all other tasks. For an application which executes in terms of tasks, the application definition can be free of the details of the execution. Many projects have demonstrated that a task system (TS) can provide such an application with a parallel, distributed, heterogeneous, adaptive, dynamic, real-time, interactive, reliable, secure or other execution. A task consists of items and thus the application is defined in terms of items. An item architecture (IA) can support arrays, routines and other structures of items, thus allowing for a structured application definition. Taking properties from many projects, the support can extend through to currying, application defined types, conditional items, streams and other definition elements. A task system and item architecture (TSIA) thus promises unprecedented levels of support for application execution and definition.; Comment: vii+244 pages, including 126 figures of diagrams and code examples. Submitted to Springer Verlag. For further information see http://www.tsia.org

The Distributed Network Processor: a novel off-chip and on-chip interconnection network architecture

Biagioni, Andrea; Cicero, Francesca Lo; Lonardo, Alessandro; Paolucci, Pier Stanislao; Perra, Mersia; Rossetti, Davide; Sidore, Carlo; Simula, Francesco; Tosoratto, Laura; Vicini, Piero
Source / Publisher: Cornell University
Type: Journal Article
Published: 07/03/2012
Language: Portuguese
Search Relevance: 57.5822%
One of the most demanding challenges for the designers of parallel computing architectures is to deliver an efficient network infrastructure providing low latency, high bandwidth communications while preserving scalability. Besides off-chip communications between processors, recent multi-tile (i.e. multi-core) architectures face the challenge of an efficient on-chip interconnection network between processor tiles. In this paper, we present a configurable and scalable architecture, based on our Distributed Network Processor (DNP) IP Library, targeting systems ranging from single MPSoCs to massive HPC platforms. The DNP provides inter-tile services for both on-chip and off-chip communications with a uniform RDMA-style API, over a multi-dimensional direct network with a (possibly) hybrid topology.; Comment: 8 pages, 11 figures, submitted to Hot Interconnect 2009

A Federated CloudNet Architecture: The PIP and the VNP Role

Abarca, Ernesto; Grassler, Johannes; Schaffrath, Gregor; Schmid, Stefan
Source / Publisher: Cornell University
Type: Journal Article
Published: 27/03/2013
Language: Portuguese
Search Relevance: 57.667344%
We present a generic and flexible architecture to realize CloudNets: virtual networks connecting cloud resources with resource guarantees. Our architecture is federated and supports different (and maybe even competing) economic roles by providing explicit negotiation and provisioning interfaces. Contract-based interactions and a resource description language that allows for aggregation and abstraction preserve the different roles' autonomy without sacrificing flexibility. Moreover, since our CloudNet architecture is plugin based, essentially all cloud operating systems (e.g., OpenStack) or link technologies (e.g., VLANs, OpenFlow, VPLS) can be used within the framework. This paper describes two roles in more detail: the Physical Infrastructure Providers (PIP), which own the substrate network and resources, and the Virtual Network Providers (VNP), which can act as resource and CloudNet brokers and resellers. Both roles are fully implemented in our wide-area prototype that spans remote sites and resources.

Disaggregated and optically interconnected memory: when will it be cost effective?

Abali, Bulent; Eickemeyer, Richard J.; Franke, Hubertus; Li, Chung-Sheng; Taubenblatt, Marc A.
Source / Publisher: Cornell University
Type: Journal Article
Published: 03/03/2015
Language: Portuguese
Search Relevance: 57.535786%
The "Disaggregated Server" concept has been proposed for datacenters, where server resources of the same type are aggregated in their respective pools, for example a compute pool, memory pool, network pool, and a storage pool. Each server is constructed dynamically by allocating the right amount of resources from these pools according to the workload's requirements. Modularity, higher packaging and cooling efficiencies, and higher resource utilization are among the suggested benefits. With the emergence of very large datacenters, "clouds" containing tens of thousands of servers, datacenter efficiency has become an important topic. A few computer chip and systems vendors are working on, and making frequent announcements about, silicon photonics and disaggregated memory systems. In this paper we study the trade-off between cost and performance of building a disaggregated memory system in which the DRAM modules in the datacenter are pooled, for example in memory-only chassis and racks. The compute pool and the memory pool are interconnected by an optical interconnect to overcome the distance and bandwidth issues of electrical fabrics. We construct a simple cost model that includes the cost of latency and the cost of bandwidth as well as the savings expected from a disaggregated memory system. We then identify the level at which a disaggregated memory system becomes cost competitive with a traditional direct-attached memory system. Our analysis shows that a rack-scale disaggregated memory system will have a non-trivial performance penalty...
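The structure of such a break-even analysis can be sketched as follows. Every quantity here is a hypothetical placeholder, not a figure from the paper: the idea is only that the pooling savings minus the cost of the latency penalty bounds what the optical interconnect may cost per server.

```python
def breakeven_interconnect_cost(dram_cost, savings_frac, slowdown_pct, value_per_pct):
    """Hypothetical break-even model: pooling saves dram_cost * savings_frac
    per server, while the optical hop costs slowdown_pct * value_per_pct in
    lost performance. The difference is the most one can spend (per server)
    on the optical interconnect and still match direct-attached memory."""
    savings = dram_cost * savings_frac
    latency_cost = slowdown_pct * value_per_pct
    return savings - latency_cost

# Illustrative numbers (assumptions, not results from the paper):
# $2000 of DRAM per server, 25% savings from pooling, an 8% slowdown
# from remote-memory latency, each 1% of slowdown valued at $30.
print(breakeven_interconnect_cost(2000, 0.25, 8, 30))
```

If the break-even value comes out negative, the latency penalty already outweighs the pooling savings and no interconnect price makes disaggregation cost-effective under these assumptions.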