## 1. INTRODUCTION

A current challenge in health care systems is to estimate physical parameters associated with the patient (Mann, 2004), driven by an increasingly aging population combined with nursing staff shortages and decreasing hospital capacity (Ko *et al*., 2010). Wireless patient tracking/monitoring provides flexible and powerful patient surveillance through wearable devices, anytime and anywhere (Ren *et al*., 2010).

The core of the health care system is automatic supervision of patients within the nursing institute, based on a lightweight system built on a Wireless Sensor Network (WSN). Patient supervision includes two major services: tracking the location of patients and monitoring their status. In the first service, when patients need urgent staff intervention, exact knowledge of their location is an important factor in reaching them in a short period of time. In the second service, the status of patients in critical situations must be continuously available to the medical staff, even while patients roam around the hosting institution’s premises. Many kinds of information on the patient’s status, such as movement characteristics and heartbeat, are continually collected and automatically detected.

Several researchers, such as Redondi *et al*. (2010), Lim *et al*. (2010) and Evennou and Marx (2006), introduced the LAURA (LocAlization and Ubiquitous monitoRing of pAtients) system, with a Personal Monitoring System (PMS) and a Personal Localization and Tracking System (PLTS), to monitor and track wireless patient localization based on Received Signal Strength (RSS). For moving targets, Lim *et al*. (2010) introduced gradient descent to determine the wireless patient position once the distances from the patient to the anchor nodes are computed via Signal-to-Distance Mapping (SDM).

Most research in WSNs has emphasized the use of PFs for solving many problems. Nguyen *et al*. (2013) derived a new method for evaluating the number of particles in a WSN. Li *et al*. (2013, 2015) introduced KLD-resampling for solving problems in WSNs under the benchmark model of maneuvering in a two-dimensional plane. Ly-Tu *et al*. (2014) introduced the ability of the KLD-resampling PF to evaluate the number of particles used for non-linear problems.

Recently, these PF methods have been combined with gradient descent to avoid wall crossing under a known environment map, improving wireless patient tracking with the help of a WSN. Redondi *et al*. (2010) and Evennou and Marx (2006) introduced a PF based on Sampling Importance Resampling (SIR) to enhance the quality of tracking, while Ly-Tu *et al*. (2015) introduced the KLD-resampling PF for wireless patient tracking with a given upper bound error at fixed probability, so as to create a sample set near the high-likelihood region. By setting a specific upper bound error for the resampling parameter, we found that this technique further reduces the patient location error while maintaining proper KLD-resampling (Li *et al*., 2015) for health care systems, because the variation of the RSS measurement values is diminished. The upper bound error of the resampling parameter in Ly-Tu *et al*. (2015) is only a special case of the Remark in Section 3.

The main contributions of this paper are summarized as follows. First, we extend the underlying system (Ly-Tu *et al*., 2015) by proposing an algorithm to obtain the upper bound error of the resampling parameter based on the maximum gap error between the proposed algorithm and the SIR method for various power levels and node densities. Second, given the strong contribution of the selected upper bound values of the resampling parameter and the weight function method, we use them to assess the impact of self-RSSI power levels versus different node densities when tracking the wireless patient location.

The paper is organized as follows. Section 2 introduces the problem-solving framework: patient localization based on the gradient descent method, improved localization via PF, and the error criteria. All related schemes, namely SIR and our method, are presented in Section 3. Experimental results based on MATLAB for wireless patient tracking are shown in Section 4. Finally, we conclude the paper in Section 5.

## 2. BACKGROUND

As noted above, we focus on wireless patient tracking in a health care system. The problem context discussed by Lim *et al*. (2010) and its location data requirements are given in the first subsection. Wireless patient localization by the gradient descent method is shown in the next subsection. Finally, the last subsection describes the PFs serving to enhance wireless patient tracking, following Evennou and Marx (2006) and Redondi *et al*. (2010).

### 2.1. Problem

Evennou and Marx (2006) introduced a typical health care system, as shown in Figure 1, which collects the required data from the patients and distributes them via a central controller. This system contains the Personal Monitoring System (PMS), the Personal Localization and Tracking System and a lightweight and flexible Network Architecture (NA). The PMS is composed of small wearable devices equipped with sensing and wireless transmission capabilities to monitor the patient’s status and transmit comprehensible information to the central controller with the help of the NA.

The accuracy of localization depends on the placement of the anchor nodes and on their density; for instance, an average localization error of just a few meters has been measured in real experiments.

Regardless of the underlying network infrastructure, localization is performed using analogous processing methods. With the placement of the anchor nodes fixed and known, RSS measurements are gathered between pairs of nodes and related to the inter-node distances. Indeed, an RSS Indicator (RSSI), which is processed by non-parametric or parametric methods, is supplied by off-the-shelf devices without the need for dedicated hardware.

The first method, the non-parametric or fingerprinting method introduced by Lim *et al*. (2010) and Evennou and Marx (2006), avoids explicit modeling of the RSSI-distance relationship. A fingerprint database, built during a training phase, holds many fingerprint vectors; each vector consists of the RSSI values measured between a client node at a specific site and the anchor nodes. A client node forms a vector of RSSI measurements between itself and each anchor node and compares this vector with the pre-computed fingerprints in the database. Finally, its position is estimated based on the placement of the most similar fingerprints. This method suffers from a non-negligible setup cost related to the construction of the fingerprint database. Furthermore, it is not robust in the face of environmental changes, since changes in RF propagation invalidate the pre-computed fingerprints.

The second method, the parametric method, explicitly assumes knowledge of the propagation model that relates the RSSI metric **S** (expressed in dBm) and the inter-node distance **d**. Patwari *et al*. (2005) introduced a simple dependency as the following model:

$\text{S}=\text{S}_{0}-10\alpha {\mathrm{log}}_{10}\left(d/{d}_{0}\right)+\upsilon$ (1)

where **S**_{0} is the RSSI measured between two nodes **d**_{0} meters apart, and *α* is the path loss factor for indoor environments. The effect of shadow fading in complex multipath environments is represented by the noise term *v*, a Gaussian random variable $N\left(0,\text{\hspace{0.17em}}{\sigma}_{\upsilon}^{2}\right)$, where the standard deviation *σ _{v}* depends on the characteristics of the specific environment. Barsocchi *et al*. (2009) introduced this method to solve the localization problem by computing the position that maximizes the likelihood with respect to the model in (1), and extended the model to take into account the signal attenuation caused by walls. The advantage of this approach is that it limits the system-setup cost, since the model parameters are estimated from RSSI measurements between anchor nodes. Meanwhile, Lim *et al*. (2010) introduced a similar approach that applies the propagation model in (1) implicitly: the distance between a client and an anchor node is modeled as a linear combination of the RSSI measurements between the client and all the nodes. The construction of these RSSI measurements in this way is called the RSSI-to-distance linear mapping, and it achieves a zero-configuration setup.
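As an illustration, the log-distance model in (1) can be simulated and then inverted to obtain a distance estimate from averaged RSSI readings. This is a minimal sketch: the calibration values below (S₀ = -40 dBm at d₀ = 1 m, α = 3, σ_v = 2 dB) are hypothetical placeholders, not parameters from any cited system.

```python
import math
import random

# Hypothetical calibration values, NOT taken from the cited systems:
S0, D0, ALPHA, SIGMA_V = -40.0, 1.0, 3.0, 2.0

def rssi_from_distance(d):
    """Eq. (1): S = S0 - 10*alpha*log10(d/d0) + v, with v ~ N(0, sigma_v^2)."""
    return S0 - 10.0 * ALPHA * math.log10(d / D0) + random.gauss(0.0, SIGMA_V)

def distance_from_rssi(s):
    """Invert the noiseless model: distance estimate given one RSSI value."""
    return D0 * 10.0 ** ((S0 - s) / (10.0 * ALPHA))

random.seed(1)
d_true = 5.0
# temporal averaging over a 30-sample window reduces the shadow-fading noise
s_avg = sum(rssi_from_distance(d_true) for _ in range(30)) / 30.0
d_hat = distance_from_rssi(s_avg)
```

Averaging before inversion matters because the noise enters in dB: a 2 dB error maps to roughly a 17% multiplicative distance error under α = 3.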

Because noise affects the RSSI measurements, temporal averaging over a time window is used. The window length defines a trade-off between RSSI measurement accuracy and system delay. As a rule of thumb, 3-5 RSSI measurements are collected from each node and updated every 200 ms.

For a moving patient, the position of the wireless patient is tracked over time so as to improve the quality of localization. To this end, Evennou and Marx (2006) applied a simple PF method which merges the RSSI measurements with the typical movement patterns of humans in indoor environments. Incorporating this a-priori information into the PF makes it possible to avoid wall crossing.

### 2.2. Indoor Localization Via Gradient Descent

Let *x*_{i} ∈ R^{2}, *i* = 1, …, *M*, denote the position of the *i*-th of *M* anchor nodes. Each anchor node obtains a vector ${\text{s}}_{\text{i}}\text{=}{\left[{\text{s}}_{\text{i1}},\text{\hspace{0.17em}}{\text{s}}_{\text{i2}},\text{\hspace{0.17em}}\cdots ,\text{\hspace{0.17em}}{\text{s}}_{\text{iM}}\text{\hspace{0.17em}}\right]}^{\text{T}}$ of RSSI measurements by exchanging data with the other nodes, where s_{ij} is the RSSI relative to the signal emitted by the *j*-th anchor node. Then let $\text{S\hspace{0.17em}}=\text{\hspace{0.17em}}\left[{\text{s}}_{\text{1}},\text{\hspace{0.17em}}{\text{s}}_{\text{2}},\text{\hspace{0.17em}}\cdots ,\text{\hspace{0.17em}}{\text{s}}_{\text{M}}\text{\hspace{0.17em}}\right]\in {\text{R}}^{\text{M}\times \text{M}}$ collect the RSSI measurements among all anchor node pairs. In general, **S** is not symmetric because radio links are not symmetric; the diagonal entries of **S** contain the self-RSSI values.

Let us define **d**_{i} = [d_{i1}, d_{i2}, …, d_{iM}]^{T}, where d_{ij} is the Euclidean distance between anchors *i* and *j*; the corresponding matrix **D** = [**d**_{1}, **d**_{2}, …, **d**_{M}] is symmetric and has zero diagonal entries. Note that these are the only parameters that need to be obtained manually during the system calibration phase, as they depend on the specific hardware. Following Lim *et al*. (2010), a linear relationship between the RSSI measurements and the logarithm of the inter-node distance is assumed:

$\mathrm{log}\left(\text{D}\right)=\text{T}\text{S}$ (2)

where $\text{T}\in {\mathbb{R}}^{\text{M}\times \text{M}}$ is the signal-to-distance mapping: each row vector of log(**D**) is represented as a linear combination of the columns of **S**, weighted by the elements of the *i*-th row ${\text{t}}_{\text{i}}^{\text{T}}$ of **T**. The matrix **T** is calculated by the least squares method as

$\text{T}=\mathrm{log}\left(\text{D}\right){\text{S}}^{+}$ (3)

where **S**^{+} denotes the (pseudo-)inverse of **S**.

Once the signal-to-distance mapping is determined, localization of a client node proceeds by collecting the RSSI measurements between itself and its adjacent anchor nodes in the vector $\widehat{s}$. Then the corresponding distance vector $\widehat{d}$ can be computed as

$\mathrm{log}\left(\widehat{d}\right)=\text{T}\widehat{s}$ (4)

The location is then estimated from the obtained distance vector $\widehat{d}$ by applying a gradient descent algorithm (Lim *et al*., 2010). This is exploited in the LAURA system to initialize tracking.
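The SDM pipeline above can be sketched end to end. This is a minimal illustration under the assumption that the mapping takes the form log(**D**) ≈ **T S** solved in the least squares sense; the anchor layout, path-loss constants, self-RSSI placeholder and noise level are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
M = 6
anchors = rng.uniform(0, 10, size=(M, 2))       # anchor positions (m), illustrative

# True pairwise distances D (symmetric, zero diagonal)
D = np.linalg.norm(anchors[:, None, :] - anchors[None, :, :], axis=2)

def rssi(d, s0=-40.0, alpha=3.0):
    """Noiseless Eq. (1); -20.0 dBm is a made-up self-RSSI placeholder for d=0."""
    return np.where(d > 0, s0 - 10 * alpha * np.log10(np.maximum(d, 1e-9)), -20.0)

S = rssi(D) + rng.normal(0, 1.0, size=(M, M))   # noisy, non-symmetric RSSI matrix

# Signal-to-distance mapping (Eq. 3): solve log(D) = T S in the least squares
# sense; the zero diagonal of D is floored to avoid log(0).
logD = np.log(np.maximum(D, 1e-3))
Tt, *_ = np.linalg.lstsq(S.T, logD.T, rcond=None)
T = Tt.T                                        # rows t_i^T of the mapping

# A client measures an RSSI vector s_hat; Eq. (4) gives log(d_hat) = T s_hat
client = np.array([4.0, 5.0])
d_true = np.linalg.norm(anchors - client, axis=1)
s_hat = rssi(d_true) + rng.normal(0, 1.0, size=M)
d_hat = np.exp(T @ s_hat)                       # estimated client-anchor distances
```

The estimated vector `d_hat` is what the gradient descent initialization of the next subsection consumes.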

### 2.3. Improving Localization Via Particle Filtering

This paper focuses on improving wireless patient tracking in the LAURA PLT (Personal Localization and Tracking) module by Redondi *et al*. (2010, 2013), using a PF to smooth out the RSSI variations.

Under static conditions with fixed client nodes, the localization algorithm described above performs well, since robust RSSI measurements can be obtained by means of time averaging. On the other hand, when the node is attached to the patient’s body, i.e., for moving client nodes, severe short-time RSSI fluctuations are observed. As a result, the accuracy of the estimated location decreases and artificial motion discontinuities appear.

The localization accuracy is improved by incorporating a-priori knowledge about the moving node and the geometry of the environment. In this system, a PF is used to track the position of the client node over time. At each time instant *t*, the state of the node is represented by the vector $\text{z}\left(t\right)={\left[\text{x}{\left(t\right)}^{\text{T}},\text{\hspace{0.17em}v}{\left(t\right)}^{\text{T}}\right]}^{\text{T}}$, where $\text{x}\left(t\right)\in {\mathbb{R}}^{2}$ indicates the position of the node and $\text{v}\left(t\right)\in {\mathbb{R}}^{2}$ its velocity.

Given the sequence of previous states and all available observations, the PF estimates the posterior probability density function of the state **z** at time *t* by a weighted set of particles $\left\{{\text{z}}_{p}\left(t\right),\text{\hspace{0.17em}}{w}_{p}\left(t\right)\right\}$, *p* = 1, …, *N*, with ${\text{z}}_{p}\left(t\right)={\left[{\text{x}}_{p}{\left(t\right)}^{\text{T}},\text{\hspace{0.17em}}{\text{v}}_{\text{p}}{\left(t\right)}^{T}\right]}^{T}$. The state of the target node is then point-wise estimated as

$\widehat{\text{z}}\left(t\right)={\sum}_{p=1}^{N}{w}_{p}\left(t\right)\text{\hspace{0.17em}}{\text{z}}_{p}\left(t\right)$ (5)

Here Δ_{T} denotes the sampling period and *α*_{x}, *α*_{v} the adaptation rates, which are tuned experimentally based on the node dynamics; these parameters enter the dynamic model used in the prediction step.

For the problem at hand, the PF alternates the execution of two steps: prediction and update. First, the assumptions of the system are discussed; then the prediction step is described, and finally the update step.

#### 2.3.1. Assumptions

At startup, the location of the target node $\widehat{x}\left(0\right)$ is first computed using the estimated vector $\widehat{d}$ through a gradient descent method that minimizes the following cost function:

$E\left(\text{x}\right)={\sum}_{i=1}^{M}{\theta}_{i}{\left({\Vert \text{x}-{\text{x}}_{i}\Vert}_{2}-{\widehat{d}}_{i}\right)}^{2}$ (6)

where ${\widehat{d}}_{\text{i}}$ is the estimated distance between the target node and the *i*-th anchor, and *θ*_{i} is a weight factor computed as in (7).

By differentiating equation (6) with respect to **x**, an iterative update rule for the estimate $\widehat{\text{x}}$ is obtained:

${\widehat{x}}^{\left(k+1\right)}={\widehat{x}}^{\left(k\right)}-\alpha {\nabla}_{\text{x}}E\left({\widehat{x}}^{\left(k\right)}\right)$ (8)

where *α* is set to 0.1 and the initial estimate ${\widehat{x}}^{\left(0\right)}$ is the location of the nearest anchor node; then ${\text{z}}_{\text{p}}\left(0\right)={\left[{\text{x}}_{\text{p}}{\left(0\right)}^{\text{T}},\text{\hspace{0.17em}}0,\text{\hspace{0.17em}}0\right]}^{\text{T}},\text{\hspace{0.17em}}\forall p$.
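The initialization of Eqs. (6) and (8) can be sketched as follows. Since the weight factor of Eq. (7) is not reproduced in this text, uniform weights θ_i = 1 are assumed here, and a small offset is added to the nearest-anchor starting point to avoid the gradient singularity exactly at an anchor position.

```python
import numpy as np

def gradient_descent_init(anchors, d_hat, theta=None, alpha=0.1, iters=200):
    """Gradient descent on E(x) = sum_i theta_i*(||x - x_i|| - d_hat_i)^2,
    cf. Eqs. (6) and (8). theta defaults to uniform weights, an assumption
    standing in for Eq. (7)."""
    anchors = np.asarray(anchors, float)
    d_hat = np.asarray(d_hat, float)
    theta = np.ones(len(anchors)) if theta is None else np.asarray(theta, float)
    # start from the nearest anchor, slightly offset so the gradient is finite
    x = anchors[np.argmin(d_hat)] + 0.1
    for _ in range(iters):
        diff = x - anchors                                   # (M, 2)
        dist = np.maximum(np.linalg.norm(diff, axis=1), 1e-9)
        # grad E = 2 * sum_i theta_i * (dist_i - d_hat_i) * (x - x_i) / dist_i
        grad = 2.0 * ((theta * (dist - d_hat) / dist)[:, None] * diff).sum(axis=0)
        x = x - alpha * grad                                 # Eq. (8), alpha = 0.1
    return x

anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]
target = np.array([3.0, 4.0])
d_true = np.linalg.norm(np.asarray(anchors) - target, axis=1)  # noiseless d_hat
x0 = gradient_descent_init(anchors, d_true)
```

With noiseless distances and four non-collinear anchors the cost has a unique global minimum at the target, so the iteration converges there.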

Secondly, since the initialization of the particle filter might locate the target outside the building, knowledge of the environment geometry is leveraged to constrain the initialization point to lie inside the building.

#### 2.3.2. Prediction

At each step, the new state vector for the *p*-th particle is computed. The following dynamic model, built upon kinematic equations, is employed:

${\text{x}}_{p}\left(t\right)={\text{x}}_{p}\left(t-1\right)+{\Delta}_{T}{\text{v}}_{p}\left(t-1\right)+{\xi}_{\text{x}},\text{\hspace{1em}}{\text{v}}_{p}\left(t\right)={\text{v}}_{p}\left(t-1\right)+{\xi}_{\text{v}}$ (9)

where *ξ*_{x} and *ξ*_{v} are samples drawn from zero-mean Gaussian random variables with variances ${\sigma}_{{\xi}_{\text{x}}}^{2}$ and ${\sigma}_{{\xi}_{\text{v}}}^{2}$, respectively, which provide the system with a diversity of hypotheses. If the geometry of the environment is known, it can be incorporated in the dynamic model, e.g. by assigning *w _{p}* = 0 to those particles that crossed a wall.
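A sketch of the prediction step, assuming the constant-velocity reading of Eq. (9); the adaptation rates mentioned earlier are not modeled here, and all numeric parameters are illustrative.

```python
import numpy as np

def predict(particles, dt=0.2, sigma_x=0.1, sigma_v=0.05, rng=None):
    """Prediction step in the spirit of Eq. (9): constant-velocity kinematics
    plus zero-mean Gaussian diffusion on position (xi_x) and velocity (xi_v).
    particles: (N, 4) array of states [x, y, vx, vy]."""
    rng = np.random.default_rng() if rng is None else rng
    out = particles.copy()
    out[:, 0:2] += dt * particles[:, 2:4]                        # x <- x + dt*v
    out[:, 0:2] += rng.normal(0.0, sigma_x, size=(len(out), 2))  # xi_x
    out[:, 2:4] += rng.normal(0.0, sigma_v, size=(len(out), 2))  # xi_v
    return out

rng = np.random.default_rng(0)
particles = np.zeros((500, 4))
particles[:, 2] = 1.0            # all particles moving along x at 1 m/s
pred = predict(particles, rng=rng)
```

The diffusion terms are what keep the particle cloud diverse enough to represent alternative motion hypotheses; a wall-crossing check would zero the weights of offending particles after this step.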

#### 2.3.3. Update

After the prediction step, the weight of the *p*-th particle is updated based on the RSSI measurements collected at time *t*. First, the vector $\widehat{d}$ described in (4) is obtained. Then a weight is assigned as in (10), where ${\widehat{d}}_{\text{i}}$ is the estimated distance and ${\Vert {\text{x}}_{\text{p}}\left(\text{t}\right)-{x}_{\text{i}}\Vert}_{2}$ is the current distance of the particle from the *i*-th anchor.

To avoid the critical situation in which all particles are trapped within a room while the client node has moved outside, the weight distribution of the particles in (10) is tested:

${\sum}_{p=1}^{N}{w}_{p}\left(t\right)>{\tau}_{1}$ (11)

where *τ*_{1} is a threshold value that we set equal to 10^{-5} in our experiments. If the test fails, the particles can no longer track the client node correctly, and the PF is re-initialized as described above. Otherwise, the particle weights are normalized such that ${\sum}_{\text{p}=1}^{\text{N}}{w}_{\text{p}}\left(t\right)=1$ and the client node’s position is determined based on (5).
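The update step can be sketched as below. The exact form of the weight in Eq. (10) is not reproduced in this text, so a Gaussian kernel on the distance residuals is assumed here; the τ₁ test is applied to the unnormalized weights, and the point estimate is the weighted mean of Eq. (5).

```python
import numpy as np

def update(particles, anchors, d_hat, sigma_w=1.0, tau1=1e-5):
    """Update step sketch: weight each particle by the agreement between the
    SDM distances d_hat and its own anchor distances (Gaussian kernel ASSUMED;
    Eq. (10) may differ), test total weight against tau1, then normalize and
    take the weighted mean as the state estimate."""
    pos = particles[:, 0:2]
    dist = np.linalg.norm(pos[:, None, :] - anchors[None, :, :], axis=2)  # (N, M)
    w = np.exp(-((dist - d_hat[None, :]) ** 2).sum(axis=1) / (2.0 * sigma_w ** 2))
    if w.sum() < tau1:        # test of Eq. (11) failed: caller must re-initialize
        return None
    w = w / w.sum()           # normalize so that sum_p w_p(t) = 1
    z_hat = (w[:, None] * particles).sum(axis=0)   # weighted mean, cf. Eq. (5)
    return w, z_hat

rng = np.random.default_rng(1)
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
particles = np.hstack([rng.uniform(0, 10, (1000, 2)), np.zeros((1000, 2))])
target = np.array([4.0, 6.0])
d_hat = np.linalg.norm(anchors - target, axis=1)   # noiseless distance vector
w, z_hat = update(particles, anchors, d_hat)
```

With exact distances to three non-collinear anchors, the weight peaks at the true position, so the weighted mean lands close to the target even from a uniform particle cloud.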

As is customary when working with PFs, a resampling step is performed if the weight distribution becomes severely skewed, i.e., only a few particles have non-negligible weight. In this paper, two kinds of resampling steps, namely SIR and KLD-resampling, are discussed in the next section.

### 2.4. Error Criteria

Let us define *Er* as the error measuring the performance in terms of the residuals, i.e., the differences between the observed and predicted values:

$Er=\sqrt{\frac{1}{K}{\sum}_{i=1}^{K}{\Vert {\widehat{x}}_{i}-{x}_{i}\Vert}^{2}}$ (12)

where ${\widehat{x}}_{\text{i}}$ and x_{i} are, respectively, the predicted and the observed location at the *i*-th observation, and K is the number of observations.
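Read as a root-mean-square Euclidean residual, the criterion in (12) can be computed directly:

```python
import numpy as np

def tracking_error(x_pred, x_obs):
    """Er of Eq. (12): root of the mean squared Euclidean residual over the
    K observations (rows of the input arrays)."""
    x_pred = np.asarray(x_pred, float)
    x_obs = np.asarray(x_obs, float)
    return float(np.sqrt((np.linalg.norm(x_pred - x_obs, axis=1) ** 2).mean()))

er = tracking_error([[0, 0], [3, 4]], [[0, 0], [0, 0]])   # residuals: 0 and 5
```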

## 3. SIR AND PROPOSAL ALGORITHMS

In this section, the SIR PF is reviewed and our proposal (the KLD-resampling PF) is presented.

### 3.1. SIR Algorithm

The concept of the auxiliary PF has been introduced by a number of researchers, such as Redondi *et al*. (2010, 2013) and Arulampalam *et al*. (2002). The PF is also known as the bootstrap filter, Monte Carlo technique, condensation algorithm, interacting particle approximation and survival of the fittest. The key idea is to represent the required posterior density function by a set of random samples (particles) with associated weights, and to compute the estimates based on these samples and weights. As the number of samples becomes very large, the Monte Carlo characterization becomes an equivalent representation of the posterior probability function, and the solution approaches the optimal Bayesian estimate. Schön (2010) introduced the sequential importance sampling (SIS) algorithm, shown in Algorithm 1 for the PF, which includes a resampling step at each instant as described in detail in the reference. The SIS algorithm uses an importance density, i.e., a proposal density standing in for a density that cannot be computed exactly; here, the sought posterior density. Hence, samples are drawn from the importance density instead of the actual density. A common problem with the SIS PF is the degeneracy phenomenon, in which all but one particle have negligible weight after a few iterations; a large computational effort is then devoted to updating particles whose contribution to the approximation of the posterior density function is almost zero. This problem can be mitigated by increasing the number of particles or, more efficiently, by appropriately choosing the importance density. In addition, the use of a resampling technique is recommended to avoid the degeneracy of the particles (Arulampalam *et al*., 2002), as in Alg. 1. The pseudo-code of the SIR PF is shown in Alg. 2.
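One standard concrete choice for the resampling step in SIR is systematic (stratified) resampling, sketched below; the text does not pin down which resampling variant Alg. 2 uses, so this is an illustrative option rather than the paper's exact procedure.

```python
import numpy as np

def systematic_resample(weights, rng=None):
    """Systematic resampling: one uniform draw positions N evenly spaced
    pointers over the CDF of the normalized weights; each pointer selects
    the particle whose CDF interval it falls into."""
    rng = np.random.default_rng() if rng is None else rng
    w = np.asarray(weights, float)
    w = w / w.sum()
    n = len(w)
    positions = (rng.random() + np.arange(n)) / n   # stratified points in [0, 1)
    cumsum = np.cumsum(w)
    cumsum[-1] = 1.0                                # guard against round-off
    return np.searchsorted(cumsum, positions)       # resampled particle indices

rng = np.random.default_rng(0)
w = np.tile([0.1, 0.1, 0.7, 0.1], 25)    # 100 particles; every 4th is "heavy"
idx = systematic_resample(w, rng)
```

Each particle is replicated either ⌊Nw_p⌋ or ⌈Nw_p⌉ times, so heavy particles multiply and light ones die out, which is exactly the degeneracy countermeasure described above.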

### 3.2. Proposal Algorithm

In this subsection we discuss two problems. First, we describe our KLD-resampling for the applied PF. Second, we present a way to determine the optimal bound values of the resampling parameter for our method, as shown in Algorithm 4. Because our proposal is related to KLD-sampling, we briefly review that method first.

Fox (2003) introduced KLD-sampling to determine the number of samples such that, with probability 1−*δ*, the error between the true posterior and the sample-based approximation is less than *ε*. The KLD is used to determine the number of samples so that the distance between the sample-based maximum likelihood estimate and the true posterior does not exceed a pre-specified threshold.

The KLD between the proposal (q) and the true (p) distributions is defined in discrete form as

$K\left(p,q\right)={\sum}_{x}p\left(x\right)\mathrm{log}W\left(x\right)$ (13)

with W(x) = p(x) / q(x). The required number N_{r} of samples is determined as

${N}_{r}=\frac{1}{2\epsilon}{\chi}_{l-1,\text{\hspace{0.17em}}1-\delta}^{2}$ (14)

where *l* is the number of bins with support, and the quantile of the Chi-square distribution satisfies

$P\left({\chi}_{l-1}^{2}\le {\chi}_{l-1,\text{\hspace{0.17em}}1-\delta}^{2}\right)=1-\delta$ (15)

Based on the Wilson-Hilferty transformation (Johnson *et al*., 1994), the approximation of ${\chi}_{l-1,\text{\hspace{0.17em}}1-\delta}^{2}$ in (14) yields

${N}_{r}=\frac{l-1}{2\epsilon}{\left(1-\frac{2}{9\left(l-1\right)}+\sqrt{\frac{2}{9\left(l-1\right)}}{z}_{1-\delta}\right)}^{3}$ (16)

where z_{1−δ} is the upper 1−*δ* quantile of the standard normal distribution.

The shortcoming above stems from how the number of particles needed to approximate a discrete distribution with an upper bound *ε* on the KLD is derived: the KLD-sampling method bounds the approximation quality of samples that are actually drawn from the proposal rather than from the true posterior distribution. Therefore, the mismatch between the true and the proposal distribution is ignored.

#### 3.2.1. KLD-resampling

To avoid this mismatch of KLD-sampling, the total number of required particles in (16) is applied in the resampling process rather than in the sampling process. Li *et al*. (2013, 2015) introduced this method, which divides the particles of the posterior distribution into bins and counts the number *l* of bins into which at least one particle is resampled, in order to determine the total number of particles to resample. Therefore, the required number N_{r} in (16) is rewritten as

${N}_{r}=\frac{l-1}{2\epsilon}{\left(1-\frac{2}{9\left(l-1\right)}+\sqrt{\frac{2}{9\left(l-1\right)}}{z}_{1-\delta}\right)}^{3}$ (17)

with *l* now counted during resampling. The pseudo-code of the proposal is shown in Algorithm 3.
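The particle count of (16)-(17) follows directly from the Wilson-Hilferty approximation and can be computed as below; the values of ε and z_{1−δ} are illustrative, not the ones used in the experiments.

```python
import math

def kld_required_particles(l, epsilon=0.05, z_1_delta=1.96):
    """Required particle count from Eqs. (16)-(17): Wilson-Hilferty
    approximation of the chi-square quantile chi2_{l-1, 1-delta}, where l
    is the number of bins with at least one (re)sampled particle."""
    if l < 2:
        return 1
    k = l - 1
    term = 1.0 - 2.0 / (9.0 * k) + math.sqrt(2.0 / (9.0 * k)) * z_1_delta
    return int(math.ceil(k / (2.0 * epsilon) * term ** 3))

n_small = kld_required_particles(l=5)     # few occupied bins -> few particles
n_large = kld_required_particles(l=50)    # many occupied bins -> more particles
```

The count grows roughly linearly in the number of occupied bins, which is why an adaptive filter can run with few particles when the posterior is concentrated and add particles only when it spreads out.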

We propose an algorithm to find the upper bound error, as shown in Algorithm 4. Let *ε*_{i} denote the *i*-th candidate upper bound error, and let $E{r}_{{\epsilon}_{i}}^{\mathrm{Pro}}$ and $E{r}^{\text{SIR}}$ be, respectively, the error of the proposal at *ε*_{i} and that of SIR; *ε*_{i*} (line 11) is the set of upper bound values that fulfil the condition of the Remark (line 10). Let us define $\Delta Er=\left[\Delta E{r}_{{\epsilon}_{1}},\text{\hspace{0.17em}}\cdots ,\text{\hspace{0.17em}}\Delta E{r}_{{\epsilon}_{Q}}\right]$, where $\Delta E{r}_{{\epsilon}_{i}}=E{r}^{\text{SIR}}-E{r}_{{\epsilon}_{i}}^{\mathrm{Pro}}$ is the gap in error between SIR and the proposal at *ε*_{i}. Algorithm 4 provides the optimal bound value for the proposal.

**Remark:** if Δ*Er* > 0 (lines 10-13), then a value *ε*_{opt} (line 16) exists that maximizes the function Δ*Er*.
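Algorithm 4 itself is not reproduced in this text, so the sketch below only illustrates the selection logic implied by the Remark, with a hypothetical interface and made-up error values: keep the candidates whose gap is positive and return the one maximizing it.

```python
def optimal_bound(epsilons, er_proposal, er_sir):
    """Sketch of the selection logic of Algorithm 4 (hypothetical interface):
    compute dEr_i = Er_SIR - Er_Pro(eps_i) for each candidate eps_i, discard
    non-positive gaps (Remark), and return the gap-maximizing candidate."""
    best_eps, best_gap = None, 0.0
    for eps, er in zip(epsilons, er_proposal):
        gap = er_sir - er              # Delta Er at this candidate
        if gap > best_gap:             # keeps only candidates with Delta Er > 0
            best_eps, best_gap = eps, gap
    return best_eps, best_gap

# made-up tracking-error values, purely for illustration
eps_opt, gap = optimal_bound([0.1, 0.3, 0.5, 0.9],
                             [2.4, 1.9, 2.1, 2.6],
                             2.3)
```

If no candidate yields a positive gap, the function returns `None`, matching the case where the Remark's condition fails and no optimal bound exists.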

## 4. SIMULATION RESULTS

We conducted a series of simulations on publicly available data sets from the ANT Lab to determine the optimal bound value of the resampling parameter in (16) using Algorithm 4. Next, the tracking error performance, various power levels of self-RSSI (Ren and Meng, 2009) and the gap error of all methods are evaluated for four node densities (Redondi *et al*., 2010, 2013; Ly-Tu *et al*., 2015). The network architecture, described by Lim *et al*. (2010), consists of MicaZ and IRIS motes running TinyOS.

All simulations were run on a PC with a Core i5-2400 @ 3.10GHz, 4.00GB RAM and MATLAB 2012a (7.14.0.739). All parameter assignments for this system are given in Table 1.

The performance of the localization system integrating the PF described in Section 2 is evaluated. A client node carried by a person was tracked along a known path in an indoor area of around 250 m^{2}. The node was placed at each test point for 30 seconds, so that 30 RSSI packets were sent to the PAN coordinator. To synchronize the RSSI message reception with the actual position of the patient node, the path was sampled every 90 cm. For every RSSI packet, the node’s location is estimated using the algorithm described in Section 2.

The test was repeated for different node densities and different power levels. To discard bias related to a specific anchor node, the estimation was repeated for each node density over randomly chosen subsets of anchor nodes.

The results in Table 2 show the optimal bound values for various self-RSSI power levels and the four node densities for our proposal. For example, at a self-RSSI power level of -15dBm, the bound values for the four node densities are about 0.3, 0.95, 0.75 and 0.9, respectively (see column 7 in Table 2). First, we use these values in our proposed method to compare the cumulative distribution function of the tracking error for gradient descent, SIR with 200 particles, SIR with 400 particles and the proposal with 200 particles, as shown in Figure 2. Then all the values in Table 2 are applied to evaluate the gap error versus the four node densities for all methods, and to examine thoroughly the impact of node density across all power levels for our proposal, in Figure 3 and Figure 4, respectively.

The comparison of the cumulative distribution function of the tracking error for gradient descent, SIR with 200 particles, SIR with 400 particles and the proposal with 200 particles, at a power level of -15dBm and for four node densities (5, 10, 15 and 20 nodes), is illustrated in Figure 2. For errors covering 80% of the cases (Pr < 80%), the tracking error of the gradient descent method stays below 5 m, around 4 m and below 2.5 m in Figure 2a, Figure 2b, and Figures 2c-d, respectively. When the PFs are enabled, the localization accuracy of our proposal is better than that of the SIR variants and the gradient descent method. For example, at Pr = 0.8 in Figure 2c, the localization accuracy curve of our proposal with 200 particles improves by about 0.11 and 0.37, respectively, over SIR with 200 particles and SIR with 400 particles. This verifies that our proposal not only reduces the number of particles used, thanks to the selected upper bound error with fixed probability of the resampling parameter, but also improves the average patient localization error compared with the SIR PFs in clinical environments. In particular, as the number of nodes increases, our curve improves the tracking error compared with the others and satisfies the accuracy requirement of 3 m to 5 m by Farid *et al*. (2013).

We further study the relationship between the gap error and the various power levels for all methods under different node densities, as shown in Figure 3. With 5 nodes, as illustrated in Figure 3a, the number of particles used by the proposal is almost equal to that used by SIR, and the statistical gap errors between the proposal and SIR are about 0.8, 0.38, 0.22, 0.2, 0.04, 0.26 and 0.36 as the self-RSSI power level changes from -25dBm to 0dBm. In addition, when the number of particles for SIR is doubled, our method still leads slightly, by about 0.03, 0.11, 0.04, 0.1, 0.06, 0.05 and 0.19, compared with this method. Similarly to Figure 3a, when the node density changes from 0.02 nodes/m^{2} to 0.08 nodes/m^{2}, the proposal also outperforms the traditional methods.

Before reaching a conclusion, it is important to examine thoroughly the impact of the various power levels on the tracking error of our proposal, from 0.02 nodes/m^{2} to 0.08 nodes/m^{2}, as illustrated in Figure 4. First, Figure 4d shows that all tracking error curves of our proposal converge and reach below 2 m at Pr = 80%. Hence, with 20 nodes (0.08 nodes/m^{2}), the proposal can operate at any power level, i.e., with a single transmit-power setting. Moreover, with 15 nodes the proposal fulfils the hard accuracy criterion at only two power levels, 0dBm and -3dBm. Figure 4a and Figure 4b illustrate which power levels achieve a tracking error between 3 m and 5 m at the other node densities.

## 5. CONCLUSIONS

In this study, KLD-resampling under an optimal bound error of the resampling parameter is introduced to smooth out the fluctuations of the RSS variations. Ultimately, wireless patient tracking over a WSN is enhanced by a PF system structure designed around these parameters. Furthermore, using these bound error values, the technique reduces the number of particles used and produces accurate patient locations in real environments. We conducted a large set of simulations to study the tracking error, the relationship between the gap error and the various power levels, and the effect of different node densities across all power levels. For future work, we will assess PFs based on an efficient on-board KLD-resampling algorithm, and we will analyse the problem probabilistically to quantify the contribution of the slope of the mean number of particles used.