Introduce Bias

Interviews

Kathy Baxter , ... Lana Yarosh , in Understanding Your Users (Second Edition), 2015

Bias

It is easy to introduce bias into an interview. Your choice of words, your way of speaking, and your body language can all introduce bias. Bias unfairly influences participants to respond in a way that does not accurately reflect their truthful feelings. Your job as an interviewer is to put aside your ideas, feelings, thoughts, and hopes about a topic and elicit those things from the participant. A skilled interviewer will word, ask, and respond to questions in a way that encourages a participant to respond truthfully and without worry of being judged. This takes practice and lots of it.

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780128002322000092

Exploratory Study

Thomas W. Edgar , David O. Manz , in Research Methods for Cyber Security, 2017

Analysis Bias

Data collection can introduce bias before work has even begun. If the study will be sampling (because the entire dataset is cost-prohibitive or restricted), the results will be downselected or sampled. The means of selection can introduce what is called sampling bias. If, for example, you divided computer systems into either server or workstation groups and treated those two groups equally, you would fail to account for the fact that there are typically far more workstations in an enterprise than there are servers.
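The server/workstation imbalance can be sketched numerically. The following Python snippet is only an illustration (the inventory sizes and group names are hypothetical, not from the chapter); it contrasts a simple random sample with a proportionally stratified one:

```python
import random

random.seed(0)

# Hypothetical enterprise inventory: workstations greatly outnumber servers.
hosts = [("server", i) for i in range(50)] + [("workstation", i) for i in range(950)]

# Simple random sample: the handful of servers may be under- or overrepresented.
simple = random.sample(hosts, 100)
print(sum(1 for kind, _ in simple if kind == "server"))

# Stratified sample: draw from each group in proportion to its true share.
def stratified(population, n):
    groups = {}
    for kind, host in population:
        groups.setdefault(kind, []).append((kind, host))
    sample = []
    for kind, members in groups.items():
        k = round(n * len(members) / len(population))
        sample.extend(random.sample(members, k))
    return sample

strat = stratified(hosts, 100)
print(sum(1 for kind, _ in strat if kind == "server"))  # 5 servers, matching the 5% share
```

Treating the two strata equally (50 samples each) would instead weight servers twentyfold relative to their real prevalence — the sampling bias described above.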

Another form of bias is systemic bias. This form of bias underlies the entire research lifecycle. A classic example would be a drug company funding pharmacologists to conduct studies on its drugs to determine safety and effectiveness. In cyber security, the easy analogy would be an antivirus/antimalware or IDS vendor contracting a university or research institution to conduct a security/vulnerability or performance assessment. This is a common practice in applied research, and a reader might be suspicious of the results of the pharmacological study evaluating the drug company's products as safe and effective, and should be equally suspicious of research that is funded by the subject or stakeholder under evaluation. Now this is not to say the research is automatically unscientific or invalid, but rather that subconscious, accidental, or deliberate tampering with the setup, the process, or the analysis might exist and should be corrected for or addressed in an appropriate manner. Examples might include having the study design and results reviewed by an independent third party, ensuring that personal profit or benefit is not coupled in any way to the outcome of the results, or establishing sufficient relationships so that the process addresses and controls for bias inherently.

Another form of systemic bias occurs at the very start of data selection. There is a trend for journals, conferences, and dataset repositories to be either paywalled or open access. The authors support this trend, as democratizing access to data ensures the broadest scientific discourse possible. Still, a side effect of this proliferation is a selection bias toward open access publications and datasets for studies. If a researcher were conducting a study of average password size compared to password complexity, open and publicly available datasets would likely be preferred to paywalled or difficult-to-access datasets. While this might not inherently be a concern, the bias and shaping of the research, even at this preliminary level, should be acknowledged and assessed. Perhaps, in this example case, it would be better to register at a paywalled or burdensome site to ensure access to the most relevant and useful data.

The final forms of bias that we will explore come from psychology. The observer (or experimenter) effect is an example of a subconscious bias where a human observer inadvertently influences or prejudices the subject. This type of bias is typically not deliberate tampering or fraud perpetrated by the survey, experiment, or study administrator, but rather subconscious, physiological, or other tells that influence the subject's behavior. An amusing example, from the turn of the last century, with an animal subject, is the case of Clever Hans.17,18 Hans was an intelligent horse that his owner Wilhelm von Osten would exhibit around Germany. For a crowd, Wilhelm would ask his horse to perform addition, subtraction, multiplication, and even logic questions. The horse would invariably tap out, with his hoof, the correct answer to the amusement of the crowds. This became such a spectacle that a commission was appointed to investigate. In the end, it turned out that the horse was watching his handler/owner, who would become progressively more and more tense until the right number of taps had occurred, and then he would relax. The horse merely tapped until the owner relaxed. This case of unconscious observer-expectancy even led to the bias being called the "Clever Hans effect," after the horse.

A similar bias, this time with the subject, is called "lab coat syndrome." Here the subject, in an effort to please the authority or expert figure, often literally dressed in a lab coat, will provide the answer or outcome that they anticipate the observer would like to see. An extreme example of this is the Milgram experiment or study,19 where participants followed the instructions of a lab-coated observer in delivering shocks to another human subject. A simpler instance would be answering a survey about password strength by lying and telling the survey taker that they never share passwords and always change them to new, complex passwords (anticipating and assuming that the observer would like a more positive outcome). A final corollary of this is called "lab coat hypertension." In medicine, this is a syndrome in which subjects who are in a medical environment and have their blood pressure measured by a medical professional experience higher blood pressures than they normally do in outside settings. This physiological response is explained by psychological pressure placed upon the subjects by the medical environment and practitioner process. It is difficult to control for because both the environment and the expert administering the blood pressure test must be removed, reducing the potential fidelity of and trust in any at-home measurements.

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780128053492000042

INTERVIEWS

Catherine Courage , Kathy Baxter , in Understanding Your Users, 2005

Bias

It is easy to introduce bias into an interview. A skilled interviewer will know how to word questions so that they do not encourage a participant to answer in a manner that does not reflect the truth. This takes practice and lots of it. Later, we discuss in detail how to avoid introducing bias into the wording of your questions (see page 265) and into your interactions with participants (see page 271).

Honesty

Individuals who are hooked on performance metrics, or who question the value of "anecdotal" data, may frown upon interviews. Sometimes people ask how you know a participant is telling the truth. The answer is that people are innately honest. It is an extremely rare participant who comes into your interview with the intention of lying to you or not providing the details you seek.

However, there are factors that can influence a participant's desire to be completely forthcoming. Participants may provide a response that they believe is socially desirable or more acceptable rather than the truth. This is known as social desirability. Similarly, a participant may describe the way things are supposed to happen rather than the way things really happen. For example, a participant may describe the process he or she uses at work according to recommended best practice, when in actuality the participant uses shortcuts and workarounds because the "best practice" is too hard to follow – but the participant does not want to reveal this. Make it clear that you need to understand the way he or she really works. If workarounds or shortcuts are used, it is helpful for you to understand this. And of course, remind the participant that all information is kept confidential – the employer will not receive a transcript of the interview.

A participant may also simply agree with whatever the interviewer suggests in the belief that it is what the interviewer wants to hear. Additionally, a participant may want to impress the interviewer and therefore provide answers that enhance his/her image. This is called prestige response bias. If you want the participant to provide a certain answer, he or she can likely pick up on that and oblige you. You can address these problems by being completely honest with yourself about your stake in the interview. If you understand what your stake in the interview is and/or what your personal biases are, you can control for them when writing questions. You can also word questions (see "Write the Questions," page 262) and respond to participants in ways that can help mitigate these issues (e.g., do not pass judgment, do not invoke authority figures). You should be a neutral evaluator at all times and encourage the participant to be completely honest with you.

Be careful about raising sensitive or highly personal topics. A survey can be a better option than an interview if you are seeking information on sensitive topics. Surveys can be anonymous, but interviews are much more personal. Participants may not be forthcoming with information in person. For more discussion on this topic, see "Asking the tough questions," page 281.

If the participant is not telling the complete truth, this will usually become apparent when you seek additional details. A skilled interviewer can identify the rare individual who is not being honest and discard that data. When a participant is telling a story that is different from what actually happened, he or she will not be able to give you specific examples but will speak only in generalities.

Tip

With continued prodding, a dishonest participant will likely become frustrated and attempt to change the subject. If you doubt the veracity of a participant's responses, you can always throw away the data and interview another participant. Refer to "Know when to move on," page 288.

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9781558609358500373

Experimental design

Jonathan Lazar , ... Harry Hochheiser , in Research Methods in Human Computer Interaction (Second Edition), 2017

3.5.2.2 Bias Caused by Experimental Procedures

Inappropriate or unclear experimental procedures may introduce biases. As discussed previously, if the order of task conditions is not randomized in an experiment with a within-group design, the observed results will be subject to the impact of the learning effect and fatigue: conditions tested later may be consistently better than conditions tested earlier due to the learning effect; on the other hand, conditions tested earlier may be consistently better than later conditions due to fatigue. The biases caused by the learning effect and fatigue push the observed value in opposite directions, and the combined effect is determined by the specific context of the experiment. If the tasks are simple and less susceptible to the learning effect, but tedious and long, the impact of fatigue and frustration may outweigh the impact of the learning effect, causing participants to consistently underperform in later sessions. If the tasks are complicated and highly susceptible to the learning effect, but short and interesting, the impact of the learning effect may outweigh the impact of fatigue, causing participants to consistently perform better in later sessions.

The instructions that participants receive play a crucial role in an experiment, and the wording of the experiment instructions should be carefully scrutinized before a study. Slightly different wording in instructions may lead to different participant responses. In a reported HCI study (Wallace et al., 1993), participants were instructed to complete the task "as quickly as possible" under one condition. Under the other condition, participants were instructed to "take your time, there is no rush." Interestingly, participants working under the no-time-stress condition completed the tasks faster than those under the time-stress condition. This suggests the importance of critical wording in instructions. It also implies that the instructions that participants receive need to be highly consistent. When a study is conducted under the supervision of multiple investigators, it is more likely that the investigators give inconsistent instructions to the participants. Instructions and procedures on a written document or prerecorded instructions are highly recommended to ensure consistency across experimental sessions.

Many times, petty and unforeseen details introduce biases into the results. For example, in an experiment that studies data entry on a PDA, the way the PDA is physically positioned may have an impact on the results. If no specification is given, some participants may hold the PDA in one hand and enter data using the other hand, while other participants may put the PDA on a table and enter data using both hands. There are notable differences between the two conditions regarding the distance between the PDA screen and the participant's eyes, the angle of the PDA screen, and the number of hands involved in data entry. Any of those factors may introduce biases into the observed results. In order to reduce the biases attributed to experimental procedures, we need to

randomize the order of conditions, tasks, and task scenarios in experiments that adopt a within-group design or a split-plot design;

prepare a written document with detailed instructions for participants;

prepare a written document with detailed procedures for experimenters; and

run multiple pilot studies before actual data collection to identify potential biases.
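The first of these steps — randomizing the order of conditions per participant — can be sketched as follows. The condition names and participant count here are illustrative assumptions, not from the chapter:

```python
import random

random.seed(42)

conditions = ["A", "B", "C"]  # hypothetical task conditions

def randomized_orders(n_participants, conditions):
    """Give each participant an independently shuffled condition order so that
    learning-effect and fatigue biases are spread across all conditions."""
    orders = []
    for _ in range(n_participants):
        order = conditions[:]   # copy so the original list is untouched
        random.shuffle(order)
        orders.append(order)
    return orders

for i, order in enumerate(randomized_orders(6, conditions), start=1):
    print(f"participant {i}: {order}")
```

For small designs, a balanced Latin square is a common alternative that guarantees each condition appears equally often in each position; plain shuffling, as here, only achieves that in expectation.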

A pilot study is not a luxury that we conduct only when we have plenty of time or money to spend. On the contrary, years of experience tell us that pilot studies are critical for all HCI experiments to identify potential biases. No matter how well you think you have planned the study, there are always things that you overlook. A pilot study is the only chance you have to fix your mistakes before you run the main study. Pilot studies should be treated very seriously and conducted in exactly the same way as planned for the actual experiment. Participants in the pilot study should be from the target population. Having one or two members of the research team complete the designed tasks is not a pilot study in its true sense (Preece et al., 1994).

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780128053904000030

Monte Carlo Integration

Matt Pharr , ... Greg Humphreys , in Physically Based Rendering (Third Edition), 2017

13.9 Bias

Another approach to variance reduction is to introduce bias into the computation: sometimes knowingly computing an estimate that doesn't actually have an expected value equal to the desired quantity can nonetheless lead to lower variance. An estimator is unbiased if its expected value is equal to the correct answer. If not, the difference

$$\beta = E[F] - \int f(x)\,\mathrm{d}x$$

is the amount of bias.

Kalos and Whitlock (1986, pp. 36–37) gave the following example of how bias can sometimes be desirable. Consider the problem of computing an estimate of the mean value of a uniform distribution $X_i \sim p$ over the interval from 0 to 1. One could use the estimator

$$\frac{1}{N}\sum_{i=1}^{N} X_i,$$

or one could use the biased estimator

$$\frac{1}{2}\max(X_1, X_2, \ldots, X_N).$$

The first estimator is in fact unbiased but has variance of order $O(N^{-1})$. The second estimator's expected value is

$$0.5\,\frac{N}{N+1} \neq 0.5,$$

so it is biased, although its variance is $O(N^{-2})$, which is much better.
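The variance gap between the two estimators is easy to check empirically. In this Python sketch the sample size and trial count are arbitrary illustrative choices, not from the book; it simulates both estimators and reports their empirical mean and variance:

```python
import random

random.seed(1)

N = 16          # samples per estimate (arbitrary illustrative choice)
TRIALS = 20000  # number of independent estimates

def unbiased(xs):
    # Sample mean: expected value 0.5, variance O(N^-1).
    return sum(xs) / len(xs)

def biased(xs):
    # Half the maximum: expected value 0.5 * N/(N+1), variance O(N^-2).
    return max(xs) / 2

def stats(estimator):
    vals = [estimator([random.random() for _ in range(N)]) for _ in range(TRIALS)]
    mean = sum(vals) / TRIALS
    var = sum((v - mean) ** 2 for v in vals) / TRIALS
    return mean, var

m1, v1 = stats(unbiased)
m2, v2 = stats(biased)
print(f"unbiased: mean={m1:.4f} var={v1:.2e}")
print(f"biased:   mean={m2:.4f} var={v2:.2e}")  # mean near 0.5*16/17 ≈ 0.4706, much smaller variance
```

With these settings the biased estimator's variance comes out roughly an order of magnitude below the unbiased one's, at the cost of a small, known shift in the mean.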

The pixel reconstruction method described in Section 7.8 can also be seen as a biased estimator. Considering pixel reconstruction as a Monte Carlo estimation problem, we'd like to compute an estimate of

$$I(x, y) = \iint f(x - x', y - y')\,L(x', y')\,\mathrm{d}x'\,\mathrm{d}y',$$

where I(x, y) is a final pixel value, f(x, y) is the pixel filter function (which we assume here is normalized to integrate to 1), and L(x, y) is the image radiance function.

Assuming we have chosen image plane samples uniformly, all samples have the same probability density, which we will denote by $p_c$. Thus, the unbiased Monte Carlo estimator of this equation is

$$I(x, y) \approx \frac{1}{N p_c}\sum_{i=1}^{N} f(x - x_i, y - y_i)\,L(x_i, y_i).$$

This gives a different result from that of the pixel filtering equation we used previously, Equation (7.12), which was

$$I(x, y) = \frac{\sum_i f(x - x_i, y - y_i)\,L(x_i, y_i)}{\sum_i f(x - x_i, y - y_i)}.$$

Yet, the biased estimator is preferable in practice because it gives a result with less variance. For example, if all radiance values $L(x_i, y_i)$ have a value of 1, the biased estimator will always reconstruct an image where all pixel values are exactly 1—clearly a desirable property. However, the unbiased estimator will reconstruct pixel values that are not all 1, since the sum

$$\sum_i f(x - x_i, y - y_i)$$

will generally not be equal to $N p_c$ and thus will have a different value, due to variation in the filter function, depending on the particular $(x_i, y_i)$ sample positions used for the pixel. Thus, the variance due to this effect leads to an undesirable result in the final image. Even for more complex images, the variance that would be introduced by the unbiased estimator is a more objectionable artifact than the bias from Equation (7.12).
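The contrast between the two estimators can be shown with a toy reconstruction. In this hedged Python sketch, the Gaussian-style filter, the pixel center at (0.5, 0.5), and the sample count are all illustrative assumptions rather than the book's exact setup; it demonstrates only that the normalized (biased) estimator reconstructs a constant image exactly while the unbiased one fluctuates:

```python
import math
import random

random.seed(3)

def filt(dx, dy, alpha=2.0):
    # An illustrative Gaussian-style pixel filter (not the book's exact filter).
    return math.exp(-alpha * (dx * dx + dy * dy))

def reconstruct(samples, pc):
    # samples: (xi, yi, L) triples drawn uniformly over a unit pixel with
    # density pc; the pixel center is taken to be (0.5, 0.5).
    n = len(samples)
    num = sum(filt(0.5 - xi, 0.5 - yi) * L for xi, yi, L in samples)
    den = sum(filt(0.5 - xi, 0.5 - yi) for xi, yi, _ in samples)
    unbiased = num / (n * pc)   # the unbiased Monte Carlo estimator
    biased = num / den          # Equation (7.12): the normalized weighted average
    return unbiased, biased

pc = 1.0
samples = [(random.random(), random.random(), 1.0) for _ in range(8)]
unbiased, biased = reconstruct(samples, pc)
print(biased)    # exactly 1.0 when all radiance values are 1
print(unbiased)  # fluctuates with the sample positions
```

Because the filter weights appear in both numerator and denominator, the biased estimator cancels the sample-position variation entirely for constant radiance; the unbiased estimator carries that variation into the final pixel value.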

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780128006450500130

Conducting a usability test

Carol M. Barnum , in Usability Testing Essentials (Second Edition), 2021

Avoid asking "bad" questions

"Bad" questions are those that don't elicit useful information and those that introduce bias. You may be tempted to ask the following kinds of questions, but you should steer clear of them.

Don't ask users to be designers—Unless they are designers by profession, they will likely struggle to show or tell how they think the design should be different. Elicit from them what doesn't work in terms of the design. For instance, if a user says, "I didn't see that option because it was on the right of the screen and I thought it was an ad," that comment tells you there is an issue with the design but doesn't put the user in the position of designing a solution.

Don't ask users to consider a future situation—Questions that ask users to imagine a future situation in which they might do something, such as use a feature of the product, put users in the awkward position of trying to understand something that is not familiar to them. For instance, if you asked participants whether they would want a new feature not yet available, most would likely say "yes." It would be better to ask how they currently do something, rather than ask if they would want to do it the way your design is supporting.

Don't ask leading questions—Leading questions introduce bias that can influence the way participants respond. One way to introduce bias is to attempt to interpret the feelings a user may be experiencing. For instance, you don't want to say, "I noticed that you were frustrated with the way the interface responded. Tell me about that." In this case, you are suggesting a specific attitude that will likely influence how the participant responds.

For more on how to ask good questions and avoid "bad" ones, see Spool's "3 Questions You Shouldn't Ask During User Research," 2010; and Schade's "Avoiding Leading Questions to Get Better Insights from Participants," 2017.

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780128169421000071

Conducting Usability Sessions

Emily Geisen , Jennifer Romano Bergstrom , in Usability Testing for Survey Research, 2017

Probing Further

If a participant makes a vague remark, such as "That's confusing," it may be unclear what the participant finds confusing. Rather than making assumptions—the wording? the layout? the task?—you can probe to get more information.

Other scenarios that may require probing further are when a participant gives a short or inadequate response to an open-ended question or when a participant says something you do not understand. Three generic ways of probing further that will not introduce bias are:

Use echoing, which is simply repeating the participant's last word or phrase back as a question (e.g., "That's confusing?").

Ask the participant, "Can you tell me more?"

Ask the participant to provide an example: "You said that you think this feature would be useful. Can you provide an example of when it would be useful?"

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780128036563000075

Observational Studies: Overview

P.R. Rosenbaum , in International Encyclopedia of the Social & Behavioral Sciences, 2001

1.3 Central Issues

Without random assignment of treatments, treated and control groups may not have been comparable prior to treatment, so differing outcomes may reflect either biases from these pretreatment differences or else effects actually caused by the treatment. The difficulty in distinguishing treatment effects from biases is the central consequence of the absence of randomization.

A variable measured prior to treatment is not affected by the treatment and is called a covariate. A variable measured after treatment may have been affected by the treatment and is called an outcome. An analysis that does not carefully distinguish covariates and outcomes can introduce biases into the analysis where none existed previously.

A pretreatment difference between treated and control groups is called an overt bias if it is accurately measured in the data at hand, and it is called a hidden bias if it is not measured. For example, if treated subjects are observed to be somewhat older than controls, and if age is recorded, then this is an overt bias. If treated subjects consumed more illegal narcotics than controls, and if accurate records of this are not available, then this is a hidden bias. Typically, overt biases are immediately visible in the data at hand, while hidden biases are a matter of concerned speculation and investigation.

In most observational studies, an effort is made to remove overt biases using one or more analytical methods, such as matched sampling, stratification, or model-based adjustments such as covariance adjustment. Adjustments are discussed in Sect. 2. Once this is accomplished, attention turns to addressing possible hidden biases, including efforts to detect hidden biases and to study the sensitivity of conclusions to biases of plausible magnitude. Hidden biases are discussed in Sects. 3–5.
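As a small illustration of removing an overt bias by stratification, the following Python sketch uses fabricated, clearly hypothetical numbers (not from the article): treated subjects are older on average, age is recorded, and comparing outcomes within age strata removes the part of the raw difference that is due to age alone:

```python
# Hypothetical records: (group, age_stratum, outcome)
records = [
    ("treated", "young", 12), ("treated", "young", 14),
    ("treated", "old", 20), ("treated", "old", 22), ("treated", "old", 24),
    ("control", "young", 10), ("control", "young", 11), ("control", "young", 12),
    ("control", "old", 18), ("control", "old", 19),
]

def mean(xs):
    return sum(xs) / len(xs)

# Naive comparison ignores that treated subjects are older on average.
naive = mean([o for g, _, o in records if g == "treated"]) - \
        mean([o for g, _, o in records if g == "control"])

# Stratified comparison: difference within each age stratum, then averaged.
diffs = []
for s in ("young", "old"):
    t = [o for g, a, o in records if g == "treated" and a == s]
    c = [o for g, a, o in records if g == "control" and a == s]
    diffs.append(mean(t) - mean(c))
adjusted = mean(diffs)

print(round(naive, 2))     # → 4.4
print(round(adjusted, 2))  # → 2.75
```

The gap between the naive and adjusted estimates is exactly the overt bias attributable to the recorded covariate; any remaining hidden bias, by definition, cannot be removed this way.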

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B0080430767004836

Template-Driven Agent-Based Modeling and Simulation with CUDA

Paul Richmond , Daniela Romano , in GPU Computing Gems Emerald Edition, 2011

21.1.5 Agent Communication and Transition Function Behavior Scripting

While agents perform independent actions, interagent communication is essential to the emergence of group behavior. Communication between agents using the X-Machine notation within FLAME GPU is introduced through the use of messages stored in global message lists. Agent transition functions are able to both output and input messages; the latter requires a message iteration loop for the agent to process message data. The use of only indirect message-based communication between agents ensures that the scheduling of agents can in no way introduce bias or any other simulation artifacts based on the order of agent updates. Figures 21.4 and 21.5 show examples of two agent transition functions that demonstrate message output and a message input loop, respectively. The ordering of transition functions is used to ensure global synchronization of messages between consecutive transition functions, and as a result, a single transition function can never perform both input and output of the same message type. Each agent transition function performs the memory mapping of M to M′ by updating the agent memory structure argument directly. Agent deaths can be signaled by the return value flag by returning any value other than 0 (the flag can then be used to compact the working list of agents).

Figure 21.4. An example of a scripted agent transition function demonstrating message output.

Figure 21.5. An example of a scripted agent transition function showing message iteration through the template-generated custom message functions.

Integration of the transition functions within automatically generated simulation code is made possible by wrapping the transition functions with global kernels (generated through the XSLT templates) that are responsible for loading and storing agent data from the SoA format into registers. The custom message functions (Figures 21.4 and 21.5) are also template generated, depending on the definition of a message within the XML model file. The custom message functions hide the same data-loading techniques as used for agent storage, with each message having a structure and SoA definition consisting of a number of memory variables.

The use of message functions to hide the iteration of messages is particularly advantageous because it abstracts the underlying algorithms from the behavioral agent scripting. This allows the same functional syntax to be used for a number of different communication techniques between agents. The most general of these techniques is brute-force communication, where an agent will read every single message of a particular type. Technically, this is implemented through the use of tiled batching of messages into shared memory [8]. The message iteration functions are responsible for performing this tiled loading into shared memory, which occurs at the beginning of the iteration loop and after each message from within a group has been serially accessed. Figure 21.6 demonstrates how this is performed and shows the access pattern from shared memory for the iteration functions, especially when iteration through shared memory has been exhausted.
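The tiled batching scheme can be mimicked serially. This Python sketch is an illustrative serial analogue, not FLAME GPU code: messages are loaded in fixed-size groups into a staging buffer standing in for shared memory, then served one at a time, the way the generated message iteration functions do on the GPU:

```python
TILE = 4  # messages per "shared memory" load; on the GPU this matches the block size

def message_iterator(messages, tile=TILE):
    """Serial analogue of a brute-force message loop: load a tile of messages,
    then serve each message in the tile before loading the next tile."""
    for start in range(0, len(messages), tile):
        shared = messages[start:start + tile]   # batched load into the staging buffer
        for msg in shared:                      # serial access within the loaded group
            yield msg

msgs = [{"id": i, "value": i * 10} for i in range(10)]
total = sum(m["value"] for m in message_iterator(msgs))
print(total)  # → 450
```

On the GPU the same structure amortizes global-memory latency: one coalesced load fills the shared tile, and every thread in the block then reads each message from fast shared memory.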

Figure 21.6. Brute-force message group loading when requesting the first and next message (left). Brute-force message group loading when requesting the next message from a new message group (right).

In addition to brute-force message communication, the FLAME GPU templates provide both a spatially partitioned message iteration technique and a discrete message partitioning technique. In the case of spatially partitioned messages, a 2-D or 3-D regular grid is used to partition the agent/message environment, depending on a prespecified message interaction range (the range in which agents read the message if it is used as an input during a transition function). When we use a parallel radix sort and texture-cached lookups [7], the message iteration loop can ensure far higher performance for limited-range interactions. Within discrete message communication, messages can be output only by discrete-spaced agents (cellular automata), with the message iteration functions operating by cycling through a fixed range in a discrete message grid. For discrete agents, message iteration through the discrete grid is accelerated by loading a single large message block into shared memory. Interaction between continuous and discrete agents is possible by continuous agents using a texture cache implementation of discrete message iteration.

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780123849885000218

Sentiment Analysis in Social Networks

E. Fersini , in Sentiment Analysis in Social Networks, 2017

5 Future Directions

In the previous sections the most recent contributions to the state of the art of sentiment analysis were presented from a machine learning point of view. Concerning future directions, some conclusions can be drawn. For the sentiment analysis methods focused on natural language, we can highlight the following:

The supervised models that are able to leverage natural language are strictly focused on explicit opinions. A challenge that remains to be addressed relates to the more difficult task of identifying and properly dealing with implicit opinions (i.e., objective statements that express a desirable or undesirable fact through regular or comparative statements). In this direction, not only could syntactic cues contribute to identifying the text constituents that characterize implicit opinions, but the semantics of co-occurrent patterns in the language could also provide a distinctive advantage.

Regarding future work on semisupervised models, a major challenge that remains to be addressed is related to incremental learning. While most of the available techniques are based on statistical learning and therefore assume a given stochastic distribution of the data they observe, an incremental learning model could be applied whenever new observations emerge and could adapt what has been learned accordingly.

According to the analysis of the literature on unsupervised models, we can affirm that although they represent a relevant alternative to the supervised and semisupervised ones, they can introduce bias when dealing with short and noisy text. The fact that social network text is composed of a few words poses considerable problems when one is applying traditional topic/sentiment models. These models typically suffer from data sparsity when estimating robust word co-occurrence statistics from short and ill-formed text. We can therefore expect as upcoming contributions several approaches able to adapt the generative process behind topic/sentiment modeling to the social network language.

For the sentiment analysis methods focused on both natural language and relationships, we highlight the following:

As a future direction for supervised models that are able to leverage both information sources, we can expect several additional extensions of probabilistic learning/inference techniques to deal with complex relational structures (i.e., connections based both on status homophily and on value homophily). From a machine learning point of view, we expect an increasing number of investigations that try to create a successful marriage between probability theory and several relational representations. In particular, the solutions to learn and infer over the relational environment of social networks are presumed to retain the relational data structure in its totality (i.e., not focusing on directly connected users, but considering the whole network) and to adapt/enrich learning and/or inference algorithms to consider the real nature of social networks.

After analyzing the state of the art of this type of semisupervised model, we believe a possible future research direction relates to the uncertainty of relationships available in social networks. The totality of the models (based on both status homophily and value homophily) assume certain relationships that do not evolve over time. In a more realistic scenario, all of these connections are uncertain: they can be broken, they can vary over time and with topic, and they can be latent (not directly observable). We can therefore expect ever richer models able to tackle the uncertainty over the relational structure to perform more accurate sentiment classification and propagation tasks.

As a future direction for unsupervised models, we expect an extension of propositional generative models (presented in Section 3.1.3) to deal with connections among users and relationships among messages. From a machine learning perspective, we expect an increasing number of investigations into the statistical relational learning domain able to explicitly model the relational component in the generative topic-sentiment models.

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780128044124000061