1.INTRODUCTION
Complex systems generally operate in dynamic environments. Operating in a dynamic environment implies that a system may encounter perturbations that threaten its value delivery. System designers should therefore design systems that continue to provide acceptable value to their stakeholders across different situations. There are numerous non-functional requirements (NFRs) that may help systems maintain their value delivery in spite of contextual changes. Each NFR has many design principles associated with it, so system designers can propose different types of system architectures, using one or a combination of these principles, that meet the system's functional requirements. The question is which of these potential system architectures performs better under an unanticipated situation in an uncertain environment.
Considering three factors (system parameter, outcome parameter, and perturbation type), the main NFRs that have been introduced by researchers are changeability (Westrum, 2006), versatility (Westrum, 2006), survivability (Richards et al., 2007), and robustness (Smart et al., 2008). Figure 1 represents the applicability of each of these properties in different situations with respect to the three factors. For example, if stakeholders require that a system do something it was not designed to do, either because the context has changed or because their needs have changed, then the system will need to be either versatile or changeable to be value robust.
To assess the ability of complex engineered systems under uncertainty, it is better to concentrate on a single criterion that is independent of these determinative factors. To this end, Mekdeci (2013) defines viability as the likelihood that an engineered system will provide acceptable value to its stakeholders over its life era. Figure 2 shows the position of viability relative to the three properties mentioned above.
Viability is a criterion of a system's ability under uncertainty that can be used for design, analysis, and improvement in the early phases of system development. The main problem, however, is that there is no comprehensive model in the literature for the measurement of viability. To surmount this problem, a nine-step mathematical model is proposed for measuring the viability of assumed system architectures under uncertainty, which can be used as the basis of comparison and tradeoffs for selecting the optimal system architecture under uncertain conditions.
The rest of the paper is structured as follows: Section 2 is dedicated to the literature review in the field of study. Section 3 describes a methodology to design complex engineered systems for viability. The application of the proposed model is demonstrated using a simplified illustrative example of a Synthetic Aperture Radar (SAR) satellite in Section 4, and finally, conclusions and ideas for future work are presented in Section 5.
2.LITERATURE REVIEW
In this section, the most relevant investigations in the field of study are presented. First, conceptual models for architecting systems with NFRs, which are used as the basis of the proposed model, are described. Then, an overview of viability concepts and quantitative methods is given.
2.1.Review on Conceptual Models for Architecting Systems with NFRs
There have been many attempts at designing NFRs into complex engineered systems and assessing their abilities in the face of uncertainty. Generally, researchers follow eight steps to this end, as follows:

Step 1: Determine the value proposition and constraints. This step focuses on identifying all external influences and their potential impact on the value delivery of the CES (Ferson et al., 2004).

Step 2: Identify the potential perturbations the system may confront (Beesemyer, 2012). A perturbation taxonomy, which helps identify the ways in which the system may fail to deliver value, is the main output of this step (Mekdeci et al., 2012).

Step 3: Identify the NFRs in the CES that promote its desired long-term behavior. Parts of this step rely on the semantic basis for NFRs, which provides the means for associating a given NFR with a specific definition, based on a set of differentiating categories (Rissanen, 1978; Ross et al., 2011). The main activities in this step are gathering direct and implied NFR requests from stakeholders, tracing the perturbations identified in Step 2, and finalizing a list of potentially useful NFRs, given mission needs and constraints, which should be carried forward into analysis and used to distinguish between architecture selections.
The goal of step 4 is to generate high-level concepts for CES architectures. It consists of brainstorming potential new constituent systems (form), as well as formulating the various CES concepts of operations. This step includes several tasks, as follows:

Definition of high-level architecture concepts, given the value proposition and constraints designated in step 1.

Generation of candidate CES forms (Mekdeci et al., 2012)

Conducting design-value mapping: a qualitative assessment of how well the potential CES concepts fulfill stakeholders' needs (Ross et al., 2009)

Finalizing the design space and recording all assumptions made.
The next step (step 5) is to generate options that result in the desired NFRs when they are added to the system architecture (Ricci et al., 2013). Conducting the perturbation-to-architecture mapping is very useful in identifying the most “impactful” perturbations. This process consists of tracing perturbations to the design variables and attribute list to estimate which design variables and attributes are impacted by changes. Selecting relevant design principles and performing cause-and-effect mapping are the ways to trace out the cause-and-effect relationships between perturbations and the CES (Wasson, 2006; Mekdeci, 2013). After generating a comprehensive list of options, the next step is to evaluate and compare them to select the final list of options to consider. Some metrics to evaluate options are:

Number of Uses: the number of times a particular option can be employed.

Cost: approximate cost of including the option in the CES architecture (acquiring, carrying, and executing).

Perturbation coverage: considering the impact and probability of perturbations, this metric evaluates the approximate coverage of the perturbation space using a given option (or portfolio of options).

Optionability: the number of options that are linked to a particular path enabler/inhibitor (Mikaelian et al., 2009).
Using these evaluations, a final list of options for consideration is obtained.
In step 6, the evaluation of various CES architecture alternatives in terms of different metrics, including value metrics (i.e., attributes and costs) and NFR metrics, is done through two tasks, as follows (Mekdeci et al., 2012) (Table 1):

Developing an abstract architecture for the CES model, including all important models: performance, cost, value, etc.

Evaluating performance of architectures within each epoch (context and needs fixed).
The purpose of step 7 is to develop and define tradeoffs among the various CES architectures. Conducting single-epoch analysis and multi-epoch analysis, as well as era analysis considering time-sequences of epochs, are the main tasks of this step (Fitzgerald et al., 2012; Fulcoly, 2012). Some NFR metrics that can be used in this step are presented in (Ricci et al., 2013; Richards, 2009):
Alternatives that perform well in the NFR metrics can be identified to be traded with alternatives that perform well in other metrics, such as cost or utility.
The final step of the process (step 8) involves the final selection of the architecture and design, using the analysis results obtained in step 7.
2.2.Overview of Viability Concepts
As discussed earlier, based on Mekdeci's (2013) research, viability is selected as the criterion for assessing a CES's ability under uncertainty for the following reasons:

There are a number of non-functional requirements with complex interrelationships. To reduce ambiguities in the calculation, it is better to concentrate on a single criterion.

Unlike other NFRs, viability is independent of the three parameters (i.e., system parameter, outcome parameter, and perturbation type) and covers all of them.
Viability has the advantages mentioned earlier, such as its dynamic nature.
Viability: Viability is the likelihood that an engineered system will provide acceptable value to its stakeholders over its life era. Based on Mekdeci's research, the main concepts of viability are as follows (Mekdeci, 2013):

Viability is applicable to all engineered systems, whether they are traditional, monolithic systems, or large systems of systems.

Viability is Subjective. Whether a system is viable is determined by how well the outputs of the system are likely to satisfy stakeholder needs.

Viability Is Dynamic. Viability is a prediction about whether the system will provide acceptable value to its stakeholders over its life era. What constitutes the life era is a prediction made by the stakeholders at the time viability is assessed.

Viability is Relative. A system can be more or less viable than another system, or than itself if something changes, since viability is a likelihood. The more likely it is that a system will provide acceptable value to its stakeholders over its life era, the more viable it is.

Viability does not mean Existence. It is possible for an engineered system to exist, for a finite period of time, without being viable.
Mekdeci, in his doctoral research, focused on presenting viability concepts and principles and recommended the development of a quantification model in future studies. Based on the literature review, only Adams has presented a model, which used the measurement questions represented in Table 2 for measuring a system's viability (Adams, 2015).
He used a Likert scale (Table 3) and Equation 1 for measuring the viability of systems.
Although we found no comprehensive mathematical model for quantifying viability and designing systems for it, it seems better to develop our model along the lines other researchers have proposed for designing NFRs into complex engineered systems. We have therefore developed our model based on Ricci's conceptual model (Ricci et al., 2014) and use viability as the main criterion for this process.
3.METHODOLOGY
This section develops a model to design complex engineered systems for viability. As discussed earlier, the model should have three main characteristics, as follows:

It must reasonably describe the uncertainty in the operational environment.

It must translate how the operational uncertainty will affect the functional and physical demands of the system.

It must calculate the viability of the assumed architecture under uncertainty by representing the regions in the system that are most impacted by the operational uncertainties.
Considering the above characteristics, a nine-step model is proposed in Figure 3. Each step is described in detail below.

Step 1: The objective of this step is to understand and define uncertainty in the operational environment. Wherever uncertainty is present, the designer is incentivized to keep options available for future use. This step provides a series of scenarios, focused on varying missions and operational tasks, to ensure a complete assessment of the functions of a system in a realistic operational context. The series of scenarios is shown in Equation 2:

$S = \{S_{1}, S_{2}, \ldots, S_{q}\}$ (2)
in which $S_{i}$ is the $i$th scenario. To analyze the impact of scenarios on the system architecture, each scenario should be scored based on experts' opinions and engineering judgment. So, similar to scoring program risks, a 5×5 matrix (Figure 4) is used to represent the likelihood and consequence of each scenario.
A basic rubric based on Pierce's (2010) research is used to assist the collaborative effort of scoring each scenario when only limited types of information are available; it is represented in Table 4 and Table 5.
Finally, in this step, each scenario's score is calculated as:

${s}^{i} = {s}_{likelihood}^{i} \times {s}_{opportunity}^{i}$ (3)

in which ${s}_{likelihood}^{i}$ and ${s}_{opportunity}^{i}$ are the likelihood and opportunity of the $i$th scenario.
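As a minimal sketch of this step (assuming, per Equation 3, that the score is simply the product of the two ratings; the scenario names and ratings below are illustrative, not taken from the paper), the scoring can be implemented as:

```python
def scenario_score(likelihood: int, opportunity: int) -> int:
    """Combined Likelihood-Opportunity (LO) score of one scenario on a 5x5 matrix."""
    if not (1 <= likelihood <= 5 and 1 <= opportunity <= 5):
        raise ValueError("ratings must be integers from 1 to 5")
    return likelihood * opportunity

# Hypothetical scenarios with (likelihood, opportunity) ratings from a Figure 4-style matrix
scenarios = {
    "S1: higher-resolution imaging requested": (4, 5),
    "S2: degraded antenna performance": (2, 4),
    "S3: wider swath coverage requested": (3, 3),
}
lo_scores = {name: scenario_score(l, o) for name, (l, o) in scenarios.items()}
```

Scores near 25 mark scenarios that are both likely and consequential, which is where design attention should concentrate.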

Step 2: This step requires a functional analysis of the system to define the additional functions required to accomplish the mission scenarios developed in the previous step, and relates the developed scenarios to the system architecture.
On the other hand, system attributes, sometimes described as key performance parameters (KPPs), are used to represent the set of functional requirements providing the desired performance. Functional requirements and their relation to system attributes can be represented as in Equation 4, in which $a_{i}$ is the $i$th attribute of the system, containing several functional requirements ($FR_{1}$, $FR_{2}$, …):

$a_{i} = \{FR_{1}, FR_{2}, \ldots\}$ (4)

Step 3: The objective of this step is to create the necessary link between operational uncertainty and design implications. For this purpose, the design structure matrix (DSM) is utilized as a modeling technique to represent the system, its interfaces, and the intensity of its relationships. The relationship between endogenous and exogenous variables is explored in this step as a means to understand how each scenario-generated functional requirement affects the physical design variables. Thus, the design parameters and physical characteristics that are impacted in meeting a new or changed functional requirement can be distinguished.
For a system with k design variables and n attributes, the DSM takes the form shown in Figure 5.
The DSM is a square matrix with k+n rows and columns, in which entries (i, j) and (j, i) are equal to 1 (or sometimes denoted with an X) if the two variables i and j are coupled. It is important to note that each attribute can also be expressed as a set of its decomposed constituent functional requirements. A simple example demonstrating the relationships between top-level requirements, attributes, design variables, and the related DSM is represented in Figure 6.
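The (k+n)×(k+n) binary DSM of Figure 5 can be sketched as follows; the variable and attribute names, and the couplings, are illustrative assumptions rather than the paper's example:

```python
import numpy as np

# k = 3 design variables followed by n = 2 attributes (illustrative)
labels = ["x1", "x2", "x3", "a1", "a2"]
k, n = 3, 2
dsm = np.zeros((k + n, k + n), dtype=int)

def couple(i: int, j: int) -> None:
    """Mark elements i and j as coupled: entries (i, j) and (j, i) are set to 1."""
    dsm[i, j] = dsm[j, i] = 1

couple(0, 1)  # x1 interacts with x2
couple(1, 3)  # x2 drives attribute a1
couple(2, 4)  # x3 drives attribute a2
```

Keeping the matrix symmetric by construction matches the undirected "coupled" reading of the DSM used here.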

Step 4: The objective of this step is to identify the variables that are most sensitive to changes in the operational demands. The goal is to find the design variables that would have to change, and also the extent of this change, in response to a change in system attributes.
The idea of a sensitivity DSM (sDSM), in which entry (i, j) represents the normalized sensitivity of parameter i to unit changes in parameter j, is used to find sensitive regions in the architecture. For the design vector:

$X = [x_{1}, x_{2}, \ldots, x_{k}]^{T}$ (5)

the sDSM is a square matrix with k rows and columns, whose normalized entry $S_{i,j}$ represents the percent change in variable i caused by a percent change in variable j:

$d_{x} = \Delta x / x^{*}$ (6)

$S_{i,j} = d_{x_{i}} / d_{x_{j}}$ (7)

In Equations 5, 6 and 7, $x_{i}$ is the $i$th variable in the design vector, $x^{*}$ is the normalized value of $x$, and $d_{x}$ is the percent change of variable $x$.
Furthermore, the sDSM is extended to include the sensitivities of design variables to changes in functional requirements. The functional requirements are represented as $a_{j}$ in Equation 8.
Each design variable is affected either directly by the change in a functional requirement, or indirectly by a propagated change in another design element. This consideration is expressed as follows:

$d_{x_{i}} = \sum_{j} S_{i,a_{j}} \, d_{a_{j}} + \sum_{j \neq i} S_{i,j} \, d_{x_{j}}$ (9)

Equation 9 states that the required change in $x_{i}$ is the cumulative change caused by all the functional requirements and other design elements to which $x_{i}$ is sensitive, in the neighborhood of $x_{i}^{*}$.
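One way to populate the sDSM entries numerically is with finite differences on a model of each dependent quantity. The sketch below assumes a hypothetical power-law attribute model (not the paper's SAR equations) and approximates the percent-change-per-percent-change sensitivity:

```python
import numpy as np

def normalized_sensitivity(f, x_star, j, rel_step=1e-4):
    """Percent change in f per percent change in x[j], evaluated at x_star."""
    x_hi, x_lo = x_star.astype(float).copy(), x_star.astype(float).copy()
    h = rel_step * x_star[j]
    x_hi[j] += h
    x_lo[j] -= h
    dfdx = (f(x_hi) - f(x_lo)) / (2.0 * h)   # central-difference df/dx_j
    return dfdx * x_star[j] / f(x_star)      # normalize: (x_j / f) * df/dx_j

# Illustrative attribute model: a = 2 / x1 * sqrt(x2)
f = lambda x: 2.0 / x[0] * np.sqrt(x[1])
x0 = np.array([1.5, 4.0])
s1 = normalized_sensitivity(f, x0, 0)  # exponent of x1 is -1, so s1 is close to -1.0
s2 = normalized_sensitivity(f, x0, 1)  # exponent of x2 is 0.5, so s2 is close to 0.5
```

For power-law models the normalized sensitivity recovers the exponent, which makes it a convenient sanity check on the sDSM entries.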

Step 5: After the DSM is completely filled, a partitioning algorithm should be used to consolidate the physical design elements that are highly responsive to the changes imposed by future use cases or scenarios.
There is a wide range of clustering algorithms, a sample of which can be found in (Alexander, 1964; Gutierrez, 1998; Hartigan, 1975; Thebeau, 2007; Whitfield et al., 2002). In this case, the combination of the following models is proposed for clustering the generated DSM matrix.

Using a fuzzy relational clustering algorithm to extract the clusters (Skabar and Abdalgader, 2013).

Using an objective function based on the minimum description length (MDL) principle to optimize the clustering process (Grünwald and Rissanen, 2007; Rissanen, 1978).
Skabar's approach is based on the PageRank algorithm. In this algorithm, the importance of a node within a graph is determined by taking into account global information recursively computed from the entire graph, with connections to high-scoring nodes contributing more to the score of a node than connections to low-scoring nodes. This importance can then be used as a measure of centrality. PageRank assigns to every node in a directed graph a numerical score between 0 and 1, known as its PageRank score (PR), defined as:

$PR(V_{i}) = \frac{1-d}{N} + d \sum_{V_{j} \in In(V_{i})} \frac{PR(V_{j})}{|Out(V_{j})|}$ (10)
where $In(V_{i})$ is the set of vertices that point to $V_{i}$, $Out(V_{j})$ is the set of vertices pointed to by $V_{j}$, $N$ is the number of nodes in the graph, and $d$ is a damping factor, typically set to around 0.8 to 0.9. Nodes visited more often will be those with many links coming in from other frequently visited nodes, and the role of $d$ is to reserve some probability for jumping to any node in the graph, thereby preventing the walk from getting stuck in a disconnected part of the graph.
Although originally proposed in the context of ranking web pages, PageRank can be used more generally to determine the importance of an object in a network.
The PageRank algorithm is easily modified to deal with weighted, undirected edges, resulting in:

$PR(V_{i}) = \frac{1-d}{N} + d \sum_{j} \frac{w_{ji}}{\sum_{k} w_{jk}} PR(V_{j})$ (11)
Where w_{ji} is the similarity between V_{j} and V_{i}, assumed to be stored in a matrix W = {w_{ij}}, similar to the affinity matrix.
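A small power-iteration sketch of this weighted, undirected variant (Equation 11); the similarity matrix W below is illustrative:

```python
import numpy as np

def weighted_pagerank(W, d=0.85, iters=100):
    """PageRank over a weighted undirected graph: each neighbour j passes its
    score to i in proportion to w_ji / sum_k w_jk, damped by d."""
    N = W.shape[0]
    pr = np.full(N, 1.0 / N)
    out_strength = W.sum(axis=1)     # sum_k w_jk for each node j
    M = W / out_strength[:, None]    # row-stochastic transition matrix
    for _ in range(iters):
        pr = (1.0 - d) / N + d * (M.T @ pr)
    return pr

W = np.array([[0.0, 0.9, 0.1],
              [0.9, 0.0, 0.2],
              [0.1, 0.2, 0.0]])
pr = weighted_pagerank(W)  # scores sum to 1; the weakly connected node 2 ranks lowest
```

Because M is row-stochastic, each iteration preserves the total score of 1, so the result stays a valid distribution over nodes.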
Skabar's proposed algorithm uses the PageRank score of an object within a cluster as a measure of its centrality to that cluster. The PageRank values are then treated as likelihoods. Since there is no parameterized likelihood function as such, the only parameters that need to be determined are the cluster membership values and the mixing coefficients. The algorithm uses Expectation-Maximization (EM) to optimize these parameters.
The similarities between objects are stored in a similarity matrix $S = \{s_{ij}\}$, where $s_{ij}$ is the similarity between objects i and j. The algorithm has three main steps, as follows:
Initialization: Cluster membership values are initialized randomly and normalized such that the cluster memberships of each object sum to unity over all clusters. Mixing coefficients are initialized such that the priors for all clusters are equal.
Expectation step: The E-step calculates the PageRank value for each object in each cluster. PageRank values for each cluster are calculated as described in Equation 11, with the affinity matrix weights $w_{ij}$ obtained by scaling the similarities by their cluster membership values; i.e.,

${w}_{ij}^{m} = {s}_{ij} \, {p}_{i}^{m} \, {p}_{j}^{m}$ (12)
Where ${w}_{ij}^{m}$ is the weight between objects i and j in cluster m, s_{ij} is the similarity between objects i and j, and ${p}_{i}^{m}$ and ${p}_{j}^{m}$ are the respective membership values of objects i and j to cluster m.
The intuition behind this scaling is that an object's entitlement to contribute to the centrality score of some other object depends not only on its similarity to that other object but also on its degree of membership to the cluster. The PageRank scores are treated as likelihoods and used to calculate the cluster membership values.
Maximization step: Since there is no parameterized likelihood function, the maximization step involves only the single step of updating the mixing coefficients based on the membership values calculated in the expectation step. More details about this algorithm and the related pseudocode can be found in Skabar's research (Skabar and Abdalgader, 2013).
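The E and M steps above can be condensed into the following sketch, which assumes a weighted-PageRank centrality as the likelihood (with Equation 12 for the scaled weights) and omits the stopping rule and any refinements of the original algorithm:

```python
import numpy as np

def pagerank_scores(W, d=0.85, iters=50):
    """Weighted PageRank as in Equation 11, with a guard for zero-strength nodes."""
    N = W.shape[0]
    pr = np.full(N, 1.0 / N)
    strength = W.sum(axis=1) + 1e-12        # avoid division by zero
    M = W / strength[:, None]
    for _ in range(iters):
        pr = (1.0 - d) / N + d * (M.T @ pr)
    return pr

def fuzzy_relational_cluster(S, n_clusters, iters=30, seed=0):
    rng = np.random.default_rng(seed)
    N = S.shape[0]
    P = rng.random((N, n_clusters))
    P /= P.sum(axis=1, keepdims=True)          # memberships sum to unity per object
    pi = np.full(n_clusters, 1.0 / n_clusters) # equal priors (mixing coefficients)
    for _ in range(iters):
        like = np.empty((N, n_clusters))
        for m in range(n_clusters):
            W_m = S * np.outer(P[:, m], P[:, m])  # Eq. 12: w_ij^m = s_ij p_i^m p_j^m
            like[:, m] = pagerank_scores(W_m)
        P = pi * like                          # E-step: PageRank treated as likelihood
        P /= P.sum(axis=1, keepdims=True)
        pi = P.mean(axis=0)                    # M-step: update mixing coefficients
    return P, pi
```

With a block-structured similarity matrix, objects in the same block tend to acquire high membership in the same cluster; choosing the number of clusters is left to the MDL-based optimization step.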
As the number of clusters can be varied from 1 to $n_{n}$ (the number of elements in the DSM matrix), an optimization step is added to Skabar's algorithm to determine the optimum number of clusters.
Optimization step: This step uses an objective function based on the minimum description length (MDL) principle (Richards et al., 2007; Grünwald and Rissanen, 2007). The MDL clustering metric is given by the weighted sum of the model description length and the mismatched-data description length, as in Equation 13:

$f = \alpha \left( n_{c} \log_{2} n_{n} + \sum_{i=1}^{n_{c}} cl_{i} \log_{2} n_{n} \right) + \beta \, S_{1} (2\log_{2} n_{n} + 1) + (1-\alpha-\beta) \, S_{2} (2\log_{2} n_{n} + 1)$ (13)

where $n_{c}$ is the number of clusters in the DSM, $n_{n}$ is the number of rows or columns in the DSM (i.e., DSM elements), $cl_{i}$ is the number of nodes in the $i$th cluster, the logarithm base is 2, $\alpha$ and $\beta$ are set between 0 and 1, and $S_{1}$ and $S_{2}$ are determined as follows:
First, another matrix (DSM′) is generated, in which each entry $d'_{ij}$ is ‘‘1’’ if and only if: (1) some cluster contains both node i and node j simultaneously, or (2) the bus (the last cluster) contains either node i or node j. Then, $d'_{ij}$ is compared with the given $d_{ij}$. For every mismatched entry, where $d'_{ij} \neq d_{ij}$, a description is needed to indicate where the mismatch occurred (i and j) and whether the mismatch is zero-to-one or one-to-zero. So $S_{1}$ and $S_{2}$ can be defined as follows:

$S_{1} = \left| \{(i,j) : d'_{ij} = 1, \; d_{ij} = 0\} \right|$ (14)

$S_{2} = \left| \{(i,j) : d'_{ij} = 0, \; d_{ij} = 1\} \right|$ (15)
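The MDL metric can be sketched as follows; it is an assumption here that each mismatch costs 2·log2(n_n)+1 bits and that clusters do not overlap (the bus cluster is omitted for brevity):

```python
import numpy as np

def mdl_score(dsm, clusters, alpha=1/3, beta=1/3):
    """Weighted sum of model description length and mismatch description length."""
    n_n = dsm.shape[0]
    log_n = np.log2(n_n)
    pred = np.zeros_like(dsm)                  # DSM' implied by the clustering
    for cl in clusters:
        for i in cl:
            for j in cl:
                if i != j:
                    pred[i, j] = 1
    s1 = int(np.sum((pred == 1) & (dsm == 0))) # predicted coupling absent in DSM
    s2 = int(np.sum((pred == 0) & (dsm == 1))) # DSM coupling missed by clustering
    model = len(clusters) * log_n + sum(len(cl) for cl in clusters) * log_n
    mismatch = 2 * log_n + 1                   # bits to encode (i, j) and the type
    return alpha * model + beta * s1 * mismatch + (1 - alpha - beta) * s2 * mismatch

dsm = np.array([[0, 1, 0, 0],
                [1, 0, 0, 0],
                [0, 0, 0, 1],
                [0, 0, 1, 0]])
good = mdl_score(dsm, [[0, 1], [2, 3]])  # clustering matches the DSM exactly
bad = mdl_score(dsm, [[0, 2], [1, 3]])   # mismatched grouping pays for every error
```

A clustering that reproduces the DSM pays only the model cost, while a poor grouping accumulates mismatch cost, which is what the optimization step minimizes over candidate numbers of clusters.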

Step 6: Step 6 combines the Likelihood-Opportunity scores from the scenarios in step 1 with the design sensitivity information from step 4, as shown in Figure 7.
This step provides insight into the regions of the CES architecture where changed or new functional requirements have the most effect.

Step 7: In this step, the quantification of viability based on the sensitive regions is done as follows. Based on the matrix generated in step 6, viability is calculated as:

$V = 1 - \frac{CSRV}{MSRV}$ (16)

where CSRV is equal to the sum of the LO × sensitivity values, z is the number of occupied (sensitive) cells, and MSRV is equal to the sum of the maximum LO value × the maximum sensitivity value over all occupied (sensitive) cells, as obtained by Equations 17 and 18:

$CSRV = \sum_{i=1}^{z} LO_{i} \times s_{i}$ (17)

$MSRV = z \times LO_{max} \times s_{max}$ (18)
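Under this reading of Equations 16-18, the calculation can be sketched as follows; the LO and sensitivity values are illustrative, and the defaults LO_max = 25 and s_max = 2 assume a 5×5 scoring matrix and a sensitivity scale topping out at 2:

```python
import numpy as np

def viability(lo_scores, sensitivities, lo_max=25.0, s_max=2.0):
    """V = 1 - CSRV / MSRV over the z occupied (sensitive) cells."""
    lo = np.asarray(lo_scores, dtype=float)
    s = np.asarray(sensitivities, dtype=float)
    z = lo.size
    csrv = float(np.sum(lo * s))   # Eq. 17: sum of LO * sensitivity
    msrv = z * lo_max * s_max      # Eq. 18: z * max LO * max sensitivity
    return 1.0 - csrv / msrv       # Eq. 16

# Four hypothetical occupied cells from the combined matrix of step 6
V = viability(lo_scores=[20, 12, 16, 9], sensitivities=[2.0, 1.0, 0.5, 0.5])
```

With this formulation, V approaches 1 when high-scoring scenarios barely touch sensitive design regions, and approaches 0 when every occupied cell is maximally sensitive.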

Step 8: The purpose of this step is to generate options which, when added to the system architecture, increase system viability. Selecting the relevant design principles associated with increasing the viability value (e.g., the design principle of margin) is the main task of this step. Design-principle-to-perturbation mapping is the way to generate potential options. To this end, the design principles related to viability are mapped to the list of perturbations that can potentially impact the CES. The mapping consists of brainstorming instantiations of design principles that can inhibit or enable CES changes as a response to a perturbation. For example, the design principle of modularity can inspire the installation of a modular subsystem on the CES, so that it can accommodate different mission needs at a later point in time. The design principles associated with viability and some other NFRs are presented by Mekdeci (Mekdeci, 2013).

Step 9: The final step of the process is to go back to step 3 and analyze the data generated in step 8. The adjustment of the DSM, the functional-to-physical mapping, the sensitivity analysis, the clustering of the regenerated DSM, and the recalculation of the viability value based on the updated system architecture are done in this step. Finally, the viability values of different system architectures can be used as the basis for tradeoffs among various CES architectures in terms of their viability behavior.
It should be noted that the assumed system architecture must meet all functional system requirements both before and after changes are imposed on it by adding viability options and using viability principles.
The applicability of the proposed model is demonstrated using a simplified illustrative example of a Synthetic Aperture Radar (SAR) satellite as a complex engineered system in the next section.
4.ILLUSTRATIVE EXAMPLE
In this section, the applicability of the proposed model is demonstrated using a simplified illustrative example of a Synthetic Aperture Radar (SAR) satellite as a complex engineered system. In step 1, different mission scenarios are developed to understand and define uncertainty in the operational environment; the scenarios are then scored based on Table 4 and Table 5, and the results are presented in Table 6. In step 2, the additional functions for accomplishing the mission scenarios developed in the previous step are identified. These functional requirements are listed in Table 7.
Subsets of the system functions that affect high-level performance characteristics can be consolidated by defining system attributes. Each operational scenario requires a change to one or more system attributes in order to respond to the new functional requirements. For simplicity, the functional requirements for each scenario are replaced by the affected system attributes, as shown in Table 8.
In step 3, the system attributes are mapped to design variables, as presented in Table 9. The design structure matrix, populated using a sample SAR block diagram, and the expanded DSM model are presented in Figure 9.
To achieve the results of the expanded DSM, steps 3 and 4 have been implemented simultaneously. A sensitivity analysis has been performed to quantify the extent to which the system design variables must change in order to accommodate the changing requirements. As the SAR system model is large and the relationships are very complex, only three attributes related to the SAR payload have been chosen for the rest of the analysis (i.e., scenarios 1, 2 and 3, which affect attributes 1, 2 and 5). The attributes have been modeled using the physical and mathematical relationships that relate them to the system parameters. These relationships are presented in Figure 8 and were simulated in MATLAB using Equation 9. The results of the sensitivity analysis are presented in Tables 10, 11 and 12.
Going down the list of design variables, those at the top have been assigned the highest sensitivity value of 2, while those at the bottom of the list have been assigned the value of 0.5. The sensitivity values have then been propagated through the DSM over three tiers/levels.
In step 5, the sDSM has been clustered in MATLAB using the algorithm described in the previous section, adapted to minimize the MDL objective. The parameters α and β were set to $\frac{1}{3}$ for the calculations, based on Yu's suggestion (Yu et al., 2007). The clustered sDSM, displayed in Figure 10, contains 10 clusters, of which clusters 4 and 5 are the regions sensitive to the mission scenarios. As cluster 9 has no design parameter, and subsequently no physical element, based on the assumed system architecture it can be inferred that scenario 3 has no effect on the system parameters and that the swath can be changed without changing the system's physical parameters.
In steps 6 and 7, by combining the Likelihood-Opportunity scores from the scenarios in step 1 with the design sensitivity information from step 4 on the clustered sDSM matrix and using Equation 16, the viability of the system is obtained as 0.47. This value of the viability parameter shows that the system is not very resilient under uncertainty, and engineers should work further on the optimization of the system design so that the V parameter is increased as far as possible.
In step 8, based on the clustered sDSM, the following options have been generated to add to the system architecture for the purpose of increasing system viability. For attribute 1 (resolution range), which is related to scenario 1, it has been decided to:

1. Replace the digital unit processor with a high-performance processor.

2. Using the design principle of margin, increase the data storage memory by up to 150%, enabling the payload digital unit to store the larger data volume that results from the increased resolution.

3. Increase the transmitter data rate to enable the system to transmit more data to the ground stations.
For attribute 2 (IRF), which is related to scenario 2, it has been decided to improve the SAR antenna design to decrease the side lobes, which results in an improved IRF.
In the final step of the process (step 9), based on the generated options added to the system architecture, the system DSM matrix has first been adjusted, and then all the steps from step 3 to step 7 have been implemented again. After the recalculation of the viability value, the results of the process (Figure 11) show that:
Because of the viability principles and options added to the system architecture, the viability value of the system improved from 0.47 to 0.55.
By comparing the clustered sDSMs (Figure 10 and Figure 11), it can be concluded that, after adding the viability options to the system architecture:

Attribute 1 has shifted to cluster 2, so there is no longer any dependency between the resolution range and the cluster 3 elements after applying the viability options. On the other hand, cluster 2 in Figure 11 shows that, to gain more viability with respect to attribute 1, the development of appropriate viability options should concentrate on the relationships between the resolution range and the power parameters and subsystems.

Cluster 4 shows that, although the antenna design improvement decreases the sensitivity of the system to scenario 2, the relationships between attribute 2 and the cluster 4 elements still remain. So it seems that the capability of the assumed system can still be improved by choosing and implementing suitable viability options in the system architecture.
In the process of carrying out the illustrative example, each step of the process and model (inputs, procedure, and outputs) has been checked and confirmed for its logical soundness by aerospace experts.
5.CONCLUSION AND FUTURE WORKS
Complex engineered systems generally operate in uncertain and dynamic environments. To respond to variations in the operational context, system designers use viability principles as options for executing design decisions or features. As there may be numerous system architectures based on different adaptable options, assessing the viability of these architectures under uncertainty, as the basis for comparison and for selecting the optimal one, is an important current problem. To address this gap, this paper proposed a nine-step mathematical model that analyzes how uncertainty affects the functional and physical characteristics of the system and calculates the viability of an assumed architecture under uncertainty by representing the regions of the system that are most impacted by the operational uncertainties.
To represent the applicability of the proposed model, a simplified example of a Synthetic Aperture Radar (SAR) satellite was considered as a complex engineered system. In the illustrative example, potential operational scenarios were identified and subsequently scored for their likelihood and conditional impact. Then, the changes to the functional requirements and system attributes necessitated by each operational scenario were determined and imposed on the impacted design variables. Furthermore, a sensitivity analysis was used to identify the design variables that are most reactive to the potential changes. These identified design variables were clustered for the quantification of system viability using the information generated in the different steps. At the end of the process, some viability options were imposed on the system architecture to increase system viability, and the system viability was then recalculated. It was shown that the viability options increased the system viability value by 8% in the face of uncertainty. It should be noted that all the inputs, procedures, and outputs of the model were checked by experts to ascertain the logical soundness of the model.
Because an exhaustive search strategy is only practical for small matrix sizes and quickly becomes prohibitively expensive, the use of metaheuristic strategies such as genetic algorithms in the model is proposed for future work. Also, analyzing the behavior of the factors α and β as variables in the model might be a good direction for future work.