
Sensor Fusion with Low-Grade Inertial Sensors and Odometer to Estimate Geodetic Coordinates in Environments without GPS Signal

Santana, Douglas Daniel Sampaio; Furukawa, Celso Massatoshi
Source: Biblioteca Digital da Produção Intelectual da USP Publisher: Biblioteca Digital da Produção Intelectual da USP
Type: Journal Article
Portuguese
Search Relevance
690.0565%
This paper presents a sensor fusion algorithm based on a Kalman Filter to estimate geodetic coordinates and reconstruct a car test trajectory in environments where there is no GPS signal. The sensor fusion algorithm is based on low-grade strapdown inertial sensors (i.e. accelerometers and gyroscopes) and an incremental odometer that provides velocity measurements. Since the dynamic system is non-linear, an Extended Kalman Filter (EKF) is used to estimate the states (i.e. latitude, longitude and altitude) and reconstruct the test trajectory. The proposed algorithm has potential to be applied in situations where GPS signals are not available, such as pipeline inspection and underwater or underground environments. The proposed inertial navigation system was developed and tested; it has shown that a closed test trajectory cannot be reconstructed satisfactorily using only inertial sensor measurements. However, when the proposed sensor fusion algorithm is used, the trajectory can be reconstructed with relative success. In preliminary experiments, it was possible to reconstruct a closed trajectory of approximately 2800 m, attaining a final error of approximately 13 m.
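As an illustration of the kind of filter this abstract describes, the following minimal Python sketch predicts a planar state from inertial measurements and corrects it with the odometer velocity in an EKF update; the state vector, motion model and noise handling are simplified assumptions for illustration, not the paper's full geodetic strapdown model.

import numpy as np

# Minimal planar EKF sketch: state x = [px, py, heading, speed]. The inertial
# measurements drive the prediction; the odometer speed corrects the drift.
def predict(x, P, gyro_z, accel_fwd, dt, Q):
    px, py, th, v = x
    x_pred = np.array([px + v * np.cos(th) * dt,
                       py + v * np.sin(th) * dt,
                       th + gyro_z * dt,
                       v + accel_fwd * dt])
    # Jacobian of the motion model with respect to the state
    F = np.array([[1.0, 0.0, -v * np.sin(th) * dt, np.cos(th) * dt],
                  [0.0, 1.0,  v * np.cos(th) * dt, np.sin(th) * dt],
                  [0.0, 0.0, 1.0, 0.0],
                  [0.0, 0.0, 0.0, 1.0]])
    return x_pred, F @ P @ F.T + Q

def update_with_odometer(x, P, v_odo, r_odo):
    H = np.array([[0.0, 0.0, 0.0, 1.0]])   # the odometer observes speed only
    S = H @ P @ H.T + r_odo                # innovation covariance (1x1)
    K = P @ H.T / S                        # Kalman gain
    x_new = x + (K * (v_odo - x[3])).ravel()
    P_new = (np.eye(4) - K @ H) @ P
    return x_new, P_new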

Combinação de métodos de inteligência artificial para fusão de sensores; Combination of artificial intelligence methods for sensor fusion

Faceli, Katti
Source: Biblioteca Digital de Teses e Dissertações da USP Publisher: Biblioteca Digital de Teses e Dissertações da USP
Type: Master's Dissertation Format: application/pdf
Published on 23/03/2001 Portuguese
Search Relevance
692.8812%
Mobile robots rely on sensor data to build a representation of their environment. However, sensors generally provide incomplete, inconsistent or imprecise information. Sensor fusion techniques have been successfully employed to increase the precision of measurements obtained with sensors. This work proposes and investigates the use of artificial intelligence techniques for sensor fusion, with the goal of improving the precision and accuracy of distance measurements between a robot and an object in its workspace, obtained with different sensors. Several machine learning algorithms are investigated to fuse the sensor data. The best model generated with each algorithm is called an estimator. This work shows that the use of estimators can significantly improve the performance achieved by each sensor in isolation. However, the various machine learning algorithms employed have different characteristics, so the estimators behave differently in different situations. Aiming at a more precise and reliable behavior, the estimators are combined into committees. The results obtained suggest that this combination can improve the reliability and precision of the distance measurements relative to the individual sensors and to the estimators used for sensor fusion.
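A minimal sketch of the committee idea described above, assuming scikit-learn regressors as the machine-learning estimators; the choice of algorithms, the synthetic data and the unweighted averaging are illustrative assumptions, not the dissertation's actual setup.

import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR
from sklearn.tree import DecisionTreeRegressor

# Placeholder training data: readings from three distance sensors (columns)
# and the corresponding true robot-object distance (target).
X = np.random.rand(200, 3)
y = X.mean(axis=1) + 0.01 * np.random.randn(200)

# One "estimator" per learning algorithm, all trained on the same data.
estimators = [MLPRegressor(max_iter=2000), DecisionTreeRegressor(), SVR()]
for est in estimators:
    est.fit(X, y)

def committee_predict(readings):
    # The committee output is the plain average of the estimators' predictions.
    preds = [est.predict(np.asarray(readings).reshape(1, -1))[0] for est in estimators]
    return float(np.mean(preds))

print(committee_predict([0.4, 0.5, 0.45]))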

Monitoramento de operações de retificação usando fusão de sensores; Monitoring of grinding operations using sensor fusion

Schühli, Luciano Alcindo
Source: Biblioteca Digital de Teses e Dissertações da USP Publisher: Biblioteca Digital de Teses e Dissertações da USP
Type: Master's Dissertation Format: application/pdf
Published on 02/08/2007 Portuguese
Search Relevance
586.5673%
This work presents the experimental analysis of a monitoring system based on the sensor fusion technique, applied to an external cylindrical grinding machine. The fusion combines the power and acoustic emission signals to obtain the FAP (Fast Abrasive Power) parameter through the method developed by Valente (2003). By simulating problems found in grinding processes (stock removal failure, collision, unbalance and vibration), the power and acoustic emission signals were acquired, the FAP parameter was generated from them, and its performance in detecting the problems was compared with that of the other two signals. For the analysis, plots of the signal variations over the process execution time were built, together with the FAP and acoustic maps. The evaluated monitoring system is characterized by low installation and operation complexity. The experimental data show that the FAP responds faster than the power signal and slightly more damped than the acoustic emission. Its signal level is equal to that of the power signal, remaining homogeneous during the process, unlike the acoustic emission, which can be influenced by several other parameters...

Navegação terrestre usando unidade de medição inercial de baixo desempenho e fusão sensorial com filtro de Kalman adaptativo suavizado.; Terrestrial navigation using low-grade inertial measurement unit and sensor fusion with smoothed adaptive Kalman filter.

Santana, Douglas Daniel Sampaio
Source: Biblioteca Digital de Teses e Dissertações da USP Publisher: Biblioteca Digital de Teses e Dissertações da USP
Type: Doctoral Thesis Format: application/pdf
Published on 01/06/2011 Portuguese
Search Relevance
691.45766%
This thesis presents the development of mathematical models and sensor fusion algorithms for terrestrial navigation using a low-grade inertial measurement unit (IMU) and the Extended Kalman Filter. The models were developed based on strapdown inertial navigation systems (SINS). The term low-grade refers to IMUs that, on their own, are not capable of self-alignment by gyrocompassing. The impossibility of navigating with only a low-grade IMU motivates the investigation of techniques that increase the accuracy of the SINS through the use of additional sensors. This thesis describes the development of the complete model of a sensor fusion scheme for the inertial navigation of a land vehicle using a low-grade IMU, an odometer and an electronic compass. Topographic landmarks were installed along the test trajectory to measure the position estimation error at those points. The development of the Smoothed Adaptive Kalman Filter (FKAS, from the Portuguese acronym) is presented, which jointly estimates the states of the sensor fusion system and the error of those estimated states. A quantitative criterion is described that uses the position uncertainties estimated by the FKAS to determine a priori...
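The abstract does not detail the FKAS equations, so the sketch below shows only one common ingredient of adaptive Kalman filtering, an innovation-based estimate of the measurement-noise covariance; the class name, window size and positivity guard are assumptions for illustration, not the thesis's method.

import numpy as np
from collections import deque

class InnovationAdaptiveR:
    # Keep a sliding window of filter innovations and estimate the measurement
    # noise covariance from their sample covariance: R ~ E[vv'] - H P H'.
    def __init__(self, window=50):
        self.innovations = deque(maxlen=window)

    def estimate(self, innovation, H, P):
        self.innovations.append(np.atleast_1d(innovation))
        nu = np.array(self.innovations)            # shape (k, m)
        C = nu.T @ nu / len(nu)                    # sample covariance of innovations
        R_est = C - H @ P @ H.T
        return np.maximum(R_est, 1e-9)             # keep the estimate positive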

Aplicação de técnicas de fusão de sensores no monitoramento de ambientes; Application of sensor fusion techniques in environmental monitoring

Rogerio Esteves Salustiano
Source: Biblioteca Digital da Unicamp Publisher: Biblioteca Digital da Unicamp
Type: Master's Dissertation Format: application/pdf
Published on 16/01/2006 Portuguese
Search Relevance
681.2563%
This work proposes a computer system in which Sensor Fusion techniques are applied to environmental monitoring. The proposed system allows the use and incorporation of different data types, including images, sounds and numbers in different bases. Among the various algorithms relevant to such a system, those which aim to combine data of the same nature, called Consensus Sensors, have been implemented. The proposed system is flexible enough to allow the inclusion of new data types and the corresponding algorithms that process them. The whole process of receiving the data produced by the sensors, configuring the system and visualizing the results is carried out over the Internet.
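As a rough illustration of combining same-nature readings, the sketch below implements a simple iterative consensus toward agreement among sensor estimates; it is a toy under stated assumptions, not the consensus algorithms actually implemented in the system.

def consensus_fuse(readings, tol=0.05, max_iter=100):
    # Repeatedly pull every estimate toward the current mean until all
    # estimates agree within the tolerance, then report the agreed value.
    x = [float(r) for r in readings]
    for _ in range(max_iter):
        mean = sum(x) / len(x)
        if all(abs(v - mean) < tol for v in x):
            break
        x = [0.5 * (v + mean) for v in x]
    return sum(x) / len(x)

print(consensus_fuse([20.1, 19.8, 20.4, 20.2]))  # hypothetical temperature readings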

Surface profile based on sensor fusion

Santos, Cristina; Fonseca, Jaime C.; Garrido, Paulo; Couto, Carlos
Source: Pergamon Publisher: Pergamon
Type: Conference or Conference Object
Published on 09/09/1998 Portuguese
Search Relevance
581.55848%
This paper reports a sensor system that has been designed and constructed to acquire the profile of surfaces. The system is based on a CCD camera for object boundary determination, mounted on the shoulder of a robot manipulator, and on ultrasonic sensors for depth measurement, mounted on a fixture at the wrist of the manipulator. Raw sensor data are fused with an average weighted by degrees of confidence, based on a heuristic set of rules.
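A minimal sketch of an average weighted by degrees of confidence, as mentioned in the abstract; the sensor values, confidence numbers and the comment about the heuristic rule are illustrative assumptions.

def fuse_weighted(measurements, confidences):
    # Weighted average of raw readings; weights are degrees of confidence in [0, 1].
    total = sum(confidences)
    if total == 0:
        raise ValueError("at least one sensor needs a non-zero confidence")
    return sum(m * c for m, c in zip(measurements, confidences)) / total

# Hypothetical example: three ultrasonic depth readings (metres), with lower
# confidence assigned near the object boundary seen by the camera.
print(fuse_weighted([0.82, 0.79, 0.85], [1.0, 0.6, 0.3]))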

Acquisition the profile of surfaces with complementary sensor fusion techniques

Fonseca, Jaime C.; Martins, Júlio S.; Couto, Carlos
Source: DAAAM International Publisher: DAAAM International
Type: Book Part
Published on //2005 Portuguese
Search Relevance
688.5074%
This paper presents complementary sensor fusion techniques for the acquisition of the profile of surfaces with minimum error using low-cost ultrasonic sensors. These surfaces are composed of areas with different depths, corners and specular surfaces. To minimize the constraints of sonar sensors, dedicated software and hardware were developed and an empirical model was obtained from real data. This model is based on two proposed concepts: Points of Constant Depth (PCD) and Areas of Constant Depth (ACD). With this sonar model in mind, four sensor fusion techniques are used separately to validate the PCDs and decide the ACDs: average and variance, a fuzzy controller and a heuristic method based on rules. In this work a PUMA 560 manipulator was equipped with a CCD video camera on the shoulder and four ultrasonic sensors on the wrist, to acquire data to model the geometry of the part's surface, exploiting the mobility of the robot. The CCD camera view defines the working area, while the ultrasonic sensors enable the acquisition of the surface profile. For the acquisition of the profile of surfaces with a minimum error, different and complementary sensor fusion techniques are implemented and applied separately, namely the average and variance...

Localization system for pedestrians based on sensor and information fusion

Anacleto, Ricardo; Figueiredo, Lino; Almeida, Ana; Novais, Paulo
Source: Institute of Electrical and Electronics Engineers (IEEE) Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Type: Conference or Conference Object
Published on //2014 Portuguese
Search Relevance
583.1964%
Nowadays there is an increasing number of location-aware mobile applications. However, these applications only retrieve location with a mobile device's GPS chip. This means that in indoor or denser environments these applications do not work properly. To provide location information everywhere, a pedestrian Inertial Navigation System (INS) is typically used, but these systems can have a large estimation error since, in order to keep the system wearable, they use low-cost and low-power sensors. In this work a pedestrian INS is proposed, in which force sensors are combined with accelerometer data for better detection of the stance phase of the human gait cycle, which leads to improvements in location estimation. Besides sensor fusion, an information fusion architecture is proposed, based on information from GPS and several inertial units placed on the pedestrian's body, which is used to learn the pedestrian's gait behavior and correct the inertial sensor errors in real time, thus improving location estimation.; This work is funded by National Funds through the FCT - Fundação para a Ciência e a Tecnologia (Portuguese Foundation for Science and Technology) within project PEst-OE/EEI/UI0752/2014. The work of Ricardo Anacleto is supported by a doctoral grant by FCT SFRH/BD/70248/2010.
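A hedged sketch of how force and accelerometer data might be combined to detect the stance phase, after which a zero-velocity update can bound inertial drift; the thresholds, units and function names are assumptions, not the values used by the authors.

import numpy as np

GRAVITY = 9.81  # m/s^2

def stance_phase(accel_xyz, force_n, force_thresh=30.0, accel_tol=0.8):
    # The foot is assumed to be on the ground when the foot force sensor is
    # loaded and the acceleration magnitude stays close to gravity.
    accel_mag = np.linalg.norm(accel_xyz)
    return (force_n > force_thresh) and (abs(accel_mag - GRAVITY) < accel_tol)

# During detected stance phases an INS can apply a zero-velocity update (ZUPT)
# to bound the drift accumulated by integrating the accelerometer data.
print(stance_phase(np.array([0.1, 0.2, 9.7]), force_n=310.0))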

Sensor fusion of laser and vision in active pedestrian detection; Deteção ativa de peões por fusão sensorial de laser e visão

Azevedo, Rui Filipe Cabral de
Source: Universidade de Aveiro Publisher: Universidade de Aveiro
Type: Master's Dissertation
Portuguese
Search Relevance
688.6824%
This work explores a sensor fusion technique that aims to equip vehicles with fast pedestrian detection mechanisms in outdoor environments. The method restricts the image search areas based on cues obtained from another sensor (LIDAR). The technique is based on the idea that, given a registration between the sensors involved, a "fast" but inaccurate sensor can indicate image regions where potential pedestrians are located, while a "slower" but more robust sensor confirms the detections more accurately. An algorithm was therefore created to merge a LIDAR-based tracking algorithm and a vision-based detection algorithm: the LIDAR indicates the location and scale of the potential pedestrian in the image, and the corresponding image region is cropped and then processed by a pedestrian detection algorithm to validate the classification. The method is tested in two different cases and the results confirm its validity.
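A sketch of the LIDAR-gated detection loop suggested by the abstract: LIDAR candidates define image regions that a slower vision classifier then confirms. All helper functions here (lidar_candidates, project_to_image, classify_pedestrian) are hypothetical placeholders for the tracking, calibration and detection stages.

def detect_pedestrians(lidar_scan, image, lidar_candidates, project_to_image,
                       classify_pedestrian):
    # Fast-but-inaccurate cue: LIDAR clusters propose candidate regions.
    # Slower-but-robust check: a vision classifier confirms each cropped region.
    detections = []
    for cluster in lidar_candidates(lidar_scan):
        u, v, w, h = project_to_image(cluster)      # ROI location and scale in pixels
        roi = image[v:v + h, u:u + w]
        if classify_pedestrian(roi):
            detections.append((u, v, w, h))
    return detections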

Dealing with the effects of sensor displacement in wearable activity recognition

Baños Legrán, Oresti; Attila Toth, Mate; Damas Hermoso, Miguel; Pomares Cintas, Héctor; Rojas Ruiz, Ignacio
Source: MDPI Publisher: MDPI
Type: Journal Article
Portuguese
Search Relevance
584.96387%
Most wearable activity recognition systems assume a predefined sensor deployment that remains unchanged during runtime. However, this assumption does not reflect real-life conditions. During the normal use of such systems, users may place the sensors in a position different from the predefined sensor placement. Also, sensors may move from their original location to a different one due to a loose attachment. Activity recognition systems trained on activity patterns characteristic of a given sensor deployment are therefore likely to fail due to sensor displacements. In this work, we explore the effects of sensor displacement induced by both the intentional misplacement of sensors and self-placement by the user. The effects of sensor displacement are analyzed for standard activity recognition techniques, as well as for an alternative robust sensor fusion method proposed in a previous work. While classical recognition models show little tolerance to sensor displacement, the proposed method is shown to have notable capabilities to assimilate the changes introduced in the sensor position due to self-placement and provides considerable improvements for large misplacements.

Context-Aided Sensor Fusion for Enhanced Urban Navigation

Martí, Enrique David; Martín, David; García, Jesús; Escalera, Arturo de la; Molina, José M.; Armingol, José M.
Source: MDPI Publisher: MDPI
Type: info:eu-repo/semantics/publishedVersion; info:eu-repo/semantics/article Format: application/pdf
Published on /12/2012 Portuguese
Search Relevance
581.2563%
The deployment of Intelligent Vehicles in urban environments requires reliable estimation of positioning for urban navigation. The inherent complexity of this kind of environments fosters the development of novel systems which should provide reliable and precise solutions to the vehicle. This article details an advanced GNSS/IMU fusion system based on a context-aided Unscented Kalman filter for navigation in urban conditions. The constrained non-linear filter is here conditioned by a contextual knowledge module which reasons about sensor quality and driving context in order to adapt it to the situation, while at the same time it carries out a continuous estimation and correction of INS drift errors. An exhaustive analysis has been carried out with available data in order to characterize the behavior of available sensors and take it into account in the developed solution. The performance is then analyzed with an extensive dataset containing representative situations. The proposed solution suits the use of fusion algorithms for deploying Intelligent Transport Systems in urban environments.; This work was supported in part by Projects CICYT TIN2011-28620-C02-01, CICYT TEC2011-28626-C02-02, CAM CONTEXTS (S2009/TIC-1485), CICYT TRA2010-20255-C03-01...
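One way to picture the context-aided idea, sketched below under stated assumptions: a rule-based module scales the GNSS measurement noise with the driving context so that the filter leans more on the inertial prediction where GNSS is unreliable. The context labels and scale factors are invented for illustration, not the article's contextual knowledge module.

# Rule-based scaling of the GNSS measurement noise according to driving context.
GNSS_NOISE_SCALE = {
    "open_road": 1.0,
    "urban_canyon": 10.0,        # multipath: distrust GNSS position fixes
    "tunnel": None,              # no usable GNSS: skip the update entirely
}

def gnss_measurement_noise(base_r, context):
    scale = GNSS_NOISE_SCALE.get(context, 1.0)
    return None if scale is None else base_r * scale

print(gnss_measurement_noise(2.5, "urban_canyon"))   # inflated variance -> 25.0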

Context-Awareness at the Service of Sensor Fusion Systems: Inverting the Usual Scheme

Martí, Enrique David; García, Jesús; Molina, José M.
Source: Springer Publisher: Springer
Type: info:eu-repo/semantics/acceptedVersion; info:eu-repo/semantics/conferenceObject; info:eu-repo/semantics/bookPart
Published on //2011 Portuguese
Search Relevance
689.9087%
Many works on context-aware systems make use of location, navigation or tracking services offered by an underlying sensor fusion module as part of the relevant contextual information. The obtained knowledge is typically consumed only by the high-level layers of the system, despite the fact that context itself represents a valuable source of information from which every part of the implemented system could benefit. This paper closes the loop, analyzing how context knowledge can be applied to improve the accuracy, robustness and adaptability of sensor fusion processes. The whole theoretical analysis is related to the indoor/outdoor navigation system implemented for a wheeled robotic platform. Some preliminary results are presented, where the context information provided by a map is integrated in the sensor fusion system.; This work was supported in part by Projects ATLANTIDA, CICYT TIN2008-06742-C02-02/TSI, CICYT TEC2008-06732-C02-02/TEC, SINPROB, CAM MADRINET S-0505/TIC/0255 DPS2008-07029-C02-02.; Proceedings of: 11th International Work-Conference on Artificial Neural Networks (IWANN 2011). International Workshop of Intelligent systems for context-based information fusion (ISCIF 11). Torremolinos-Málaga, Spain, June 8-10, 2011

Multi-camera and Multi-modal Sensor Fusion, an Architecture Overview

Luis Bustamante, Álvaro; Molina, José M.; Patricio Guisado, Miguel Ángel
Source: Springer Publisher: Springer
Type: info:eu-repo/semantics/acceptedVersion; info:eu-repo/semantics/conferenceObject; info:eu-repo/semantics/bookPart
Published on //2010 Portuguese
Search Relevance
678.3923%
This paper outlines an architecture for multi-camera and multi-modal sensor fusion. We define a high-level architecture in which image sensors like standard color, thermal, and time-of-flight cameras can be fused with high-accuracy location systems based on UWB, Wifi, Bluetooth or RFID technologies. This architecture is especially well-suited for indoor environments, where such heterogeneous sensors usually coexist. The main advantage of such a system is that a combined non-redundant output is provided for all the detected targets. The fused output includes in its simplest form the location of each target, plus additional features depending on the sensors involved in the target detection, e.g., location plus thermal information. This way, a surveillance or context-aware system obtains more accurate and complete information than by using only one kind of technology; This work was supported in part by Projects CICYT TIN2008-06742-C02-02/TSI, CICYT TEC2008-06732-C02-02/TEC, SINPROB, CAM CONTEXTS S2009/TIC-1485 and DPS2008-07029-C02-02; Proceedings of: Fourth International Workshop on User-Centric Technologies and applications (CONTEXTS 2010). Valencia, 07-10 September, 2010.
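A possible shape for the non-redundant fused output described above, sketched as a small Python record in which location is always present and modality-specific attributes are filled only when the corresponding sensor contributed; all field names are assumptions for illustration.

from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class FusedTarget:
    # One non-redundant record per detected target: location is always present,
    # modality-specific attributes only when the corresponding sensor saw it.
    target_id: int
    x_m: float
    y_m: float
    thermal_c: Optional[float] = None          # thermal camera, if available
    tag_id: Optional[str] = None               # UWB/RFID location system, if any
    extra: Dict[str, float] = field(default_factory=dict)

track = FusedTarget(target_id=7, x_m=3.2, y_m=1.5, thermal_c=36.4)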

A framework for context-aware sensor fusion

Martí Muñoz, Enrique David
Source: Universidad Carlos III de Madrid Publisher: Universidad Carlos III de Madrid
Type: Doctoral Thesis
Portuguese
Search Relevance
701.2637%
Sensor fusion is a mature but very active research field, included in the more general discipline of information fusion. It studies how to combine data coming from different sensors in such a way that the resulting information is better in some sense (more complete, accurate or stable) than any of the original sources used individually. Context is defined as everything that constrains or affects the process of solving a problem without being part of the problem or the solution itself. Over the last years, the scientific community has shown a remarkable interest in the potential of exploiting this context information for building smarter systems that can make better use of the available information. Traditional sensor fusion systems are based on fixed processing schemes over a predefined set of sensors, where both the employed algorithms and the domain are assumed to remain unchanged over time. Nowadays, affordable mobile and embedded systems have high sensory, computational and communication capabilities, making them a perfect base for building sensor fusion applications. This fact represents an opportunity to explore fusion systems that are bigger and more complex, but poses the challenge of offering optimal performance under changing and unexpected circumstances. This thesis proposes a framework supporting the creation of sensor fusion systems with self-adaptive capabilities...

Robust sensor fusion in real maritime surveillance scenarios

García, Jesús; Guerrero Madrid, José Luis; Luis Bustamante, Álvaro; Molina, José M.
Source: IEEE. The Institute of Electrical and Electronics Engineers, Inc Publisher: IEEE. The Institute of Electrical and Electronics Engineers, Inc
Type: info:eu-repo/semantics/conferenceObject; info:eu-repo/semantics/article; info:eu-repo/semantics/bookPart Format: application/pdf
Published on //2010 Portuguese
Search Relevance
681.2563%
This paper presents the design and evaluation of a sensor fusion system for maritime surveillance. The system must exploit the complementary AIS-radar sensing technologies to synthesize a reliable surveillance picture using a highly efficient implementation to operate in dense scenarios. The paper highlights the realistic effects taken into account for robust data combination and system scalability.; This work was supported in part by a national project with NUCLEO CC, and research projects CICYT TEC2008-06732-C02-02/TEC, CICYT TIN2008-06742-C02-02/TSI, SINPROB, CAM CONTEXTS S2009/TIC-1485 and DPS2008-07029-C02-02.; 8 pages, 14 figures.-- Proceedings of: 13th International Conference on Information Fusion (FUSION'2010), Edinburgh, Scotland, UK, Jul 26-29, 2010).

On the Use of Sensor Fusion to Reduce the Impact of Rotational and Additive Noise in Human Activity Recognition

Baños Legrán, Oresti; Damas Hermoso, Miguel; Pomares Cintas, Héctor; Rojas Ruiz, Ignacio
Source: MDPI Publisher: MDPI
Type: Journal Article
Portuguese
Search Relevance
591.4577%
This article belongs to the Special Issue Select papers from UCAmI 2011 - the 5th International Symposium on Ubiquitous Computing and Ambient Intelligence (UCAmI'11).; The main objective of fusion mechanisms is to increase the individual reliability of the systems through the use of collective knowledge. Moreover, fusion models are also intended to guarantee a certain level of robustness. This is particularly required for problems such as human activity recognition, where runtime changes in the sensor setup seriously disturb the reliability of the initially deployed systems. For commonly used recognition systems based on inertial sensors, these changes are primarily characterized as sensor rotations, displacements or faults related to the batteries or calibration. In this work we show the robustness capabilities of a sensor-weighted fusion model when dealing with such disturbances under different circumstances. Using the proposed method, improvements of up to 60% are obtained when a minority of the sensors are artificially rotated or degraded, independently of the level of disturbance (noise) imposed. These robustness capabilities also apply to any number of sensors affected by a low to moderate noise level. The presented fusion mechanism compensates for the poor performance that would otherwise be obtained when just a single sensor is considered.

Improvement of Speckle-Tracked Freehand 3-D Ultrasound Through the Use of Sensor Fusion

Lang, Andrew
Source: Queen's University Publisher: Queen's University
Type: Doctoral Thesis Format: 3438152 bytes; application/pdf
Portuguese
Search Relevance
687.7638%
Freehand 3-D ultrasound (US) using a 2-D US probe has the advantage over conventional 3-D probes of being able to collect arbitrary 3-D volumes at a lower cost. Traditionally, generating a volume requires external tracking to record the US probe position. An alternative means of tracking the US probe position is through speckle tracking. Ultrasound imaging has the advantage that the speckle inherent in all images contains relative position information due to the decorrelation of speckle over distance. However, tracking the position of US images using speckle information alone suffers from drifts caused by tissue inconsistencies and overall lack of accuracy. This thesis presents two novel methods of improving the accuracy of speckle-tracked 3-D US through the use of sensor fusion. The first method fuses the speckle-tracked US positions with those measured by an electromagnetic (EM) tracker. Measurements are combined using an unscented Kalman filter (UKF). The fusion is able to reduce drift errors as well as to eliminate high-frequency jitter noise from the EM tracker positions. Such fusion produces a smooth and accurate 3-D reconstruction superior to those using the EM tracker alone. The second method involves the registration of speckle-tracked 3-D US volumes to preoperative CT volumes. We regard registration combined with speckle tracking as a form of sensor fusion. In this case...

Sensor fusion for interactive real-scale modeling and simulation systems

MIRZAEI, Mohammad Ali; CHARDONNET, Jean-Rémy; PERE, Christian; MERIENNE, Frédéric
Source: IEEE Publisher: IEEE
Portuguese
Search Relevance
688.5074%
This paper proposes an accurate sensor fusion scheme for navigation inside a real-scale 3D model by combining audio and video signals. The audio signal of a microphone array is merged by the Minimum Variance Distortionless Response (MVDR) algorithm and processed instantaneously via a Hidden Markov Model (HMM) to generate translation commands through the word-to-action module of the speech processing system. Then, the output of an optical head tracker (four IR cameras) is analyzed by a non-linear/non-Gaussian Bayesian algorithm to provide information about the orientation of the user's head. The orientation is used to redirect the user toward a new direction by applying a quaternion rotation. The output of these two sensors (video and audio) is combined under the sensor fusion scheme to perform continuous travelling inside the model. The maximum precision for the travelling task is achieved under the sensor fusion scheme. Practical experiments show promising results for the implementation.
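The quaternion-rotation step mentioned in the abstract can be sketched as follows; the quaternion convention (w, x, y, z) and the example values are assumptions for illustration, not the paper's implementation.

import numpy as np

def rotate_by_quaternion(q, v):
    # Rotate vector v by unit quaternion q = (w, x, y, z):
    # v' = v + 2*u x (u x v + w*v), with u = (x, y, z).
    w, x, y, z = q
    u = np.array([x, y, z])
    return v + 2.0 * np.cross(u, np.cross(u, v) + w * v)

forward = np.array([0.0, 0.0, 1.0])              # current travel direction
head_q = np.array([0.924, 0.0, 0.383, 0.0])      # roughly a 45-degree yaw
print(rotate_by_quaternion(head_q, forward))     # redirected travel direction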

Self-localization in ubiquitous computing using sensor fusion

Zampieron, Jeffrey
Source: Rochester Institute of Technology Publisher: Rochester Institute of Technology
Type: Doctoral Thesis Format: 3484055 bytes; application/pdf
Portuguese
Search Relevance
589.18844%
The widespread availability of small and inexpensive mobile computing devices and the desire to connect them at any time in any place has driven the need to develop an accurate means of self-localization. Devices that typically operate outdoors use GPS for localization. However, most mobile computing devices operate not only outdoors but indoors where GPS is typically unavailable. Therefore, other localization techniques must be used. Currently, there are several commercially available indoor localization systems. However, most of these systems rely on specialized hardware which must be installed in the mobile device as well as the building of operation. The deployment of this additional infrastructure may be unfeasible or costly. This work addresses the problem of indoor self-localization of mobile devices without the use of specialized infrastructure. We aim to leverage existing assets rather than deploy new infrastructure. The problem of self-localization utilizing single and dual sensor systems has been well studied. Typically, dual sensor systems are used when the limitations of a single sensor prevent it from functioning with the required level of performance and accuracy. A second sensor is often used to complement and improve the measurements of the first one. Sometimes it is better to use more than two sensors. In this work the use of three sensors with complementary characteristics was explored. The three sensor system that was developed included a positional sensor...