======Good Methods References====== ==== Network Interview Design Effects ==== This page is for collecting references to methodological resources for network data collection designs to inform the use of EgoWeb. \\ \\ === Name Generators === \\ Bernard, H.R., E.C. Johnsen, P.D. Killworth, C. McCarty, G.A. Shelley, S. Robinson. [[http://www.sciencedirect.com/science/article/pii/037887339090005T|Comparing four different methods for measuring personal social networks. Social Networks]]. 1990. 12:179-215.\\ \\ Bidart C, Charbonneau J. [[http://fmx.sagepub.com/content/early/2011/05/02/1525822X11408513.full.pdf|How to Generate Personal Networks: Issues and Tools for a Sociological Perspective]]. Field Methods. 2011; 23(3):266-286.\\ \\ The debate on the limits and relevance of the different name generators comes with the development of social network studies. The core questions are: What are they supposed to construct? For what research question? Some procedures tend to choose a precise target with a unique name generator; others prefer to use a series of name generators. The authors discuss here some specificities and advantages of these methods for ego-centered networks. The authors then present the ‘‘contextual’’ name generator, which was developed in longitudinal qualitative panel studies in France and Quebec. This tool gives access to a great variety of information focused on sociological questions. Its original design differentiates two complementary stages to distinguish the global contexts-based network from specific resource-based networks. This tool remains flexible and may be adapted to different topics.\\ \\ Brashears M. [[https://sociologicalscience.com/download/volume%201/november/SocSci_v1_493to511.pdf|"Trivial" Topics and Rich Ties: The Relationship Between Discussion Topic, Alter Role, and Resource Availability using the "Important Matters" Name Generator]]. Sociological Science. 2014; 1:493-511.\\ \\ This paper uses a nationally representative dataset of discussion relationships to determine what Americans consider to be an important matter, whether some topics are predominantly discussed with certain types of associates, and if the topic of discussion or the role of the discussant predicts the availability of social support. Results indicate that some topics are pursued or avoided with particular types of alters, and that the role of the discussant, but not the topic of discussion, predicts the availability of support from our discussion partners. This implies that some differences in measured network structure may be due to variations in topics discussed, but that topic says little about the supportiveness of the tie once we are dealing with important matters discussants.\\ \\ Burt, RS. [[http://www.sciencedirect.com/science/article/pii/S0378873397000038|A note on social capital and network content]]. Social Networks. 1997; 19:355-373.\\ \\ As a guide to selecting name generators for social capital research, I use network data on a probability sample of heterogeneous senior managers to describe how they sort relations into kinds, and how the kinds vary in contributing to social capital. Managers sort relations on two dimensions of strength - intimacy (especially close versus distant) versus activity (frequent contact with new acquaintances versus rare contact with old friends) - and with respect to two contents - personal discussion (confiding and socializing relations) versus corporate authority (the formal authority of the boss and informal authority of essential buy-in). 
Comparing name generators for their construct validity as indicators of social capital, I compute network constraint from different kinds of relations, and correlate constraint with early promotion. The correlation is strong for the network of personal relations, zero for the network of authority relations, and strongest for personal and authority relations together. I close with research design recommendations for selecting name generators.\\ \\ Campbell, K.E. and B.A. Lee. [[http://www.sciencedirect.com/science/article/pii/037887339190006F|Name generators in surveys of personal networks]]. Social Networks. 1991; 13:203-221.\\ \\ To investigate the consequences of name generators for network data, we compare characteristics of egocentric networks from Wellman’s East York survey, Fischer’s Northern California Communities Study, the General Social Survey, and our study of networks in 81 Nashville, Tennessee neighborhoods. Network size, age and education heterogeneity, and average tie characteristics were most strongly affected by the name generator used. Network composition, and racial and sexual heterogeneity, were more invariant across different kinds of name generators.\\ \\ Ferligoj A, Hlebec V. [[http://www.sciencedirect.com/science/article/pii/S0378873399000076|Evaluation of social network measurement instruments]]. Social Networks. 1999; 21:111-130.\\ \\ This paper evaluates the reliability and validity of network measurement instruments for measuring social support. The authors present and discuss the results from eight experiments which were designed to analyze the quality of four measurement scales: binary, categorical, categorical with labels, and line production, as well as two measurement techniques for listing alters (free recall and recognition). Reliability and validity were estimated by the true score multitrait–multimethod (MTMM) approach. Meta-analysis of factors affecting the reliability and the validity of network measurement was done by multiple classification analysis MCA. The results show that the binary scale and the first presentation of measurement instruments are the least reliable. Surprisingly, the two data collection techniques free recall and recognition yield equally reliable data.\\ \\ Hlebec, V, Ferligoj, A. [[http://journals.sagepub.com/doi/pdf/10.1177/15222X014003003|Reliability of Social Network Measurement Instruments]]. Field Methods. 2002; 14(3):288-306.\\ \\ This article evaluates the quality of instruments for measuring support in social networks. The authors discuss the results of ten experiments designed to analyze the reliability of five measurement scales as well as two measurement methods for listing alters (free recall and recognition), type of network question (original, reciprocated), and characteristics of study design (time between instrument presentations). Analysis shows that the binary scale and the first presentation of measurement instruments are the least reliable. The most reliable were ordinal scales, among which the five-category ordinal scale with labels was the most reliable. The two data collection methods (free recall and recognition) and the two types of network questions (original, reciprocated) yield equally reliable data.\\ \\ Hsieh YP. [[http://www.sciencedirect.com/science/article/pii/S0378873314000744|Check the phone book: Testing information and communication technology (ICT) recall aids for personal network surveys]]. Social Networks. 
2015; 41:101-112.\\ \\ This study tested two recall aids for the name generator procedure via a randomized web experiment with 447 college students, eliciting their personal networks. Compared to participants solely presented with the name generator, participants being prompted and probed to consult records saved in their communication devices provided more comprehensive network data and more weak ties. Furthermore, these data were garnered without either a substantial increase in item nonresponse or a decrease in completion time for subsequent name interpreters. Thus, ICT recall aids are deemed cost-effective and context-neutral techniques to improve the recall accuracy of data collected by the name generator.\\ \\ Kogovsek, T, Mrzel, M, Hlebec, V.[[http://search.proquest.com/openview/62a1a2c0b1f18b1f3115d2c4fed7705d/1?pq-origsite=gscholar| "Please Name the First Two People you Would Ask for Help": The Effect of Limitation of the Number of Alters on Network Composition]]. Metodoloski zvezki. 2010; 7(2):95-106.\\ \\ Social network indicators (e.g., network size, network structure and network composition) and the quality of their measurement may be affected by different factors such as measurement method, type of social support, limitation of the number of alters, context of the questionnaire, question wording, personal characteristics of respondents such as age, gender or personality traits and others.\\ In this paper we focus on the effect of limiting the number of alters on network composition indicators (e.g., percentage of kin, friends etc.), which are often used in substantive studies on social support, epidemiological studies and so on. Often social networks are only one among many topics measured in such large studies; therefore, limitation of the number of alters that can be named is often used directly (e.g., International Social Survey Programme) or indirectly (e.g., General Social Survey) in the network items.\\ The analysis was done on two comparable data sets from different years. Data were collected by the name generator approach by students of the University of Ljubljana as part of various social science methodology courses. Network composition on the basis of direct use (i.e., already in the question wording) of limitation on the number of alters is compared to network composition on full network data (i.e.,collected without any limitations).\\ \\ Marin A. [[http://www.sciencedirect.com/science/article/pii/S0378873304000292|Are respondents more likely to list alters with certain characteristics?: Implications for name generator data]]. Social Networks. 2004;26(4):289-307.\\ \\ Analyses of egocentric networks make the implicit assumption that the list of alters elicited by name generators is a complete list or representative sample of relevant alters. Based on the literature on free recall tasks and the organization of people in memory, I hypothesize that respondents presented with a name generator are more likely to name alters with whom they share stronger ties, alters who are more connected within the network, and alters with whom they interact in more settings. I conduct a survey that presents respondents with the GSS name generator and then prompts them to remember other relevant alters whom they have not yet listed. By comparing the alters elicited before and after prompts I find support for the first two hypotheses. 
I then go on to compare network-level measures calculated with the alters elicited by the name generator to the same measures calculated with data from all alters. These measures are not well correlated. Furthermore, the degree of underestimation of network size is related to the networks’ mean closeness, density, and mean duration of relationships. Higher values on these variables result in more accurate estimation of network size. This suggests that measures of egocentric network properties based on data collected using a single name generator may have high levels of measurement error, possibly resulting in misestimation of how these network properties relate to other variables.\\ \\ Marin A, Hampton KN. [[http://fmx.sagepub.com/content/19/2/163.full.pdf|Simplifying the Personal Network Name Generator: Alternatives to Traditional Multiple and Single Name Generators]]. Field Methods. 2007; 19(2):163-193.\\ \\ Researchers studying personal networks often collect network data using name generators and name interpreters. We argue that when studying social support, multiple name generators ensure that researchers sample from a multidimensional definition of support. However, because administering multiple name generators is time consuming and strains respondent motivation, researchers often use single name generators. We compared network measures obtained from single generators to measures obtained from a six-item multiple-name generator. Although some single generators provided passable estimates of some measures, no single generator provided reliable estimates across a broad spectrum of network measures. We then evaluated two alternative methods of reducing respondent burden: (1) the MMG, a multiple generator using the two most robust name generators and (2) the MGRI, a six-item name generator with name interpreters administered for a random subset of alters. Both the MMG and the MGRI were more reliable than single generators when measuring size, density, and mean measures of network composition or activity, though some single name generators were more reliable for measures consisting of sums or counts.\\ \\ McCallister, L, Fischer, CS. [[http://journals.sagepub.com/doi/abs/10.1177/004912417800700202|A Procedure for Surveying Personal Networks]]. Sociological Methods & Research. 1978; 7(2):131-148\\ \\ The application of network analysis to certain issues in sociology requires measurement of individuals’ personal networks. These issues generally involve the impact of structural locations on persons’ social lives. One such case is the Northern California Community Study of the personal consequences of residential environments. This article describes and illustrates the methodology we have developed for studying personal networks by mass survey. It reviews the conceptual problems in network definition and measurement, assesses earlier efforts, presents our technique, and illustrates its applications.\\ \\ Pustejovsky JE, Spillane JP. [[http://www.sciencedirect.com/science/article/pii/S0378873309000318|Question-order effects in social network name generators]]. Social Networks. 2009; 31:221-229.\\ \\ Social network surveys are an important tool for empirical research in a variety of fields, including the study of social capital and the evaluation of educational and social policy. 
A growing body of methodological research sheds light on the validity and reliability of social network survey data regarding a single relation, but much less attention has been paid to the measurement of multiplex networks and the validity of comparisons among criterion relations. In this paper, we identify ways that surveys designed to collect multiplex social network data might be vulnerable to question-order effects. We then test several hypotheses using a split-ballot experiment embedded in an online multiple name generator survey of teachers’ advice networks, collected for a study of complete networks. We conclude by discussing implications for the design of multiple name generator social network surveys.\\ \\ Reza Yousefi-Nooraie, Alexandra Marin, Robert Hanneman, Eleanor Pullenayegum, Lynne Lohfeld, Maureen Dobbins. [[http://journals.sagepub.com/doi/abs/10.1177/0049124117701484|The Relationship Between the Position of Name Generator Questions and Responsiveness in Multiple Name Generator Surveys]]. Sociological Methods & Research. 2017. E-pub\\ \\ Using randomly ordered name generators, we tested the effect of name generators’ relative position on the likelihood of respondents’ declining to respond or satisficing in their response. An online survey of public health staff elicited names of information sources, information seekers, perceived experts, and friends. Results show that when name generators are asked later, they are more likely to go unanswered and respondents are more likely to respond that they do not know anyone or list fewer names. The effect of sequence was not consistent in different question types, which could be the result of the moderating effect of willingness to answer and question sensitivity.\\ \\ Shakya HB, Christakis NA, Fowler JH. [[http://www.sciencedirect.com/science/article/pii/S0378873316303495|An exploratory comparison of name generator content: Data from rural India]]. Social Networks. 2017;48:157-168.\\ \\ Since the 1970s sociologists have explored the best means for measuring social networks, although few name generator analyses have used sociocentric data or data from developing countries, partly because sociocentric studies in developing countries have been scant. Here, we analyze 12 different name generators used in a sociocentric network study conducted in 75 villages in rural Karnataka, India. Having unusual sociocentric data from a non-Western context allowed us to extend previous name generator research through the unique analyses of network structural measures, an extensive consideration of homophily, and investigation of status difference between egos and alters. We found that domestic interaction questions generated networks that were highly clustered and highly centralized. Similarity between respondents and their nominated contacts was strongest for gender, caste, and religion. We also found that domestic interaction name generators yielded the most homogeneous ties, while advice questions yielded the most heterogeneous. Participants were generally more likely to nominate those of higher social status, although certain questions, such as who participants talk to uncovered more egalitarian relationships, while other name generators elicited the names of social contacts distinctly higher or lower in status than the respondent. 
Some questions also seemed to uncover networks that were specific to the cultural context, suggesting that network researchers should balance local relevance with global generalizability when choosing name generators.\\ \\ Straits BC. [[http://www.sciencedirect.com/science/article/pii/S0378873300000186|Ego's important discussants or significant people: An experiment in varying the wording of personal network name generators]]. Social Networks. 2000; 22:123-140.\\ \\ There is considerable disagreement about the best personal network name generator to employ when only a single question is practical. One general approach is to ask the respondents (egos) to delineate the core members (alters) of their personal networks according to affective criteria (e.g., "the most significant people in your life"). Another approach provides more guidance to the egos by asking about alters with whom they have had specific interactions or social exchanges (e.g., "discuss important matters"). Finally, most name generators have been criticized for their preoccupation with positive ties to the exclusion of the difficult or negative relationships that may be an important part of ego's social world. An experiment with a 2×2 factorial design was embedded within an interviewer-administered survey of 426 college students to explore the effects on reported network size and composition of (a) varying the delineation criteria ("significant people" or the 1985 General Social Survey (GSS) "important matters") and (b) including or excluding a probe for negative interactions ("These may include people that sometimes make you angry or upset"). The name-generator wording manipulations produced modest network compositional differences (ego–alter role relationships and discussion topics) that varied by the sex of both egos and their alters. Compared to the "important matters" criterion, the "significant people" generator elicited slightly more cross-sex relatives and fewer same-sex close friends and co-workers from female (but not male) respondents. The negative probe produced some statistically significant but substantively unimportant compositional differences. The results suggest that major differences in name-generator wording may in some situations have little or no effect on reported egocentric networks.\\ \\ === Name interpreters === \\ Kogovsek, T, Ferligoj, A. [[http://www.sciencedirect.com/science/article/pii/S0378873305000110|Effects on reliability and validity of egocentered network measurements]]. Social Networks. 2005; 27:205-229.\\ \\ This paper examines the reliability and validity of egocentered networks. Reliability and validity are estimated by the multitrait-multimethod (MTMM) approach. A split ballot MTMM design [Saris, W.E., 1999. Forced choice or agree/disagree questions? An evaluation by the split ballot MTMM experiment. In: Proceedings of the Meeting of the IRMCS, pp. 122–146; Kogovšek, T., Ferligoj, A., Coenders, G., Saris, W.E., 2002. Estimating the reliability and validity of personal support measures: full information ML estimation with planned incomplete data. Social Networks 24, 1–20] is used, in which separate groups of respondents received different combinations of two methods.
The effect of factors such as the methods used and the personal characteristics of respondents that can affect the quality of data was estimated by a meta-analysis.\\ Measurement method, type of question, network size, age, gender, extraversion and emotional stability all had statistically significant effects on the validity of measurement. After the list of alters is obtained with name generators, name interpreter questions can be asked in two ways. One way ("by alters") is to take each alter individually and to ask all the questions about him/her, going alter by alter until the end of the list of alters. The other way ("by questions") is to take the question and ask this question for all alters on the list, going question by question until the end of the list of name interpreter questions. Telephone interviewing (both by alters and by questions) gave more valid measurements than face-to-face interviews.\\ Behavioral questions were more valid than questions with emotional content. The characteristics of ties were more validly measured in smaller networks. With reference to respondents' personal characteristics, younger respondents, men, extraverted and emotionally stable respondents all had more valid measurements. Reliability was significantly affected by the measurement method, the type of question and age. The telephone/by alters method was the most reliable measurement method. Behavioral questions were more reliable than questions with emotional content. Measurements among younger respondents were also more reliable.\\ \\ Thaden, LL, Rotolo, T. [[http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.211.4342&rep=rep1&type=pdf|The Measurement of Social Networks: A Comparison of Alter-Centered and Relationship-Centered Survey Designs]]. Connections. 2009; 29(1):15-25. [NOT STRICTLY EGOCENTRIC]\\ \\ Utilizing two surveys administered to a classroom of college students, this study explores differences in social network measures based on survey instrument design. By administering both a relationship-centered survey and an alter-centered survey, we analyze differences in range, mean numbers of relationships, network centralization, and network density. Nonparametric tests are also used to discern patterns of similarity and difference. We find that measurement differences are often negligible when asking about extremely close relationships like friendship. However, differences often appear when studying "weak tie" types of relationships such as recognition of classmate names or acquaintances.\\ \\ ===Position generators=== \\ Hallsten, M, Edling, C, Rydgren, J. [[http://www.sciencedirect.com/science/article/pii/S0378873314000355|The effects of specific occupations in position generator measures of social capital]]. Social Networks. 2015; 40:55-63.\\ \\ The position generator is a widespread method for measuring latent social capital in which respondents are queried about contacts on a list of occupations predefined by the analyst. We separate out the unique contribution of each occupation to aggregated measures of social capital. It turns out that this contribution varies vastly: knowing a person in some occupations provides substance to measures of social capital, while knowing a person in a few occupations is irrelevant and contributes statistical noise and causes attenuation bias. We discuss the implication of our findings for the design of position generator measures generally.\\ \\ van der Gaag, M, Appeljof, GJ, Webber, M.
[[https://www.researchgate.net/profile/Martin_Van_Der_Gaag/publication/264286881_Ambiguities_in_responses_to_the_Position_Generator/links/53d782cf0cf228d363eb118e.pdf|Ambiguities in responses to the Position Generator]]. Published in Italian as Ambiguita nelle risposte al position generator. Sociologia e Politiche Sociali. 2012; 15(2):113-141.\\ \\ The Position Generator is a popular measurement instrument for individual-level social capital (Nan Lin & Dumin, 1986; Nan Lin, Yang-Chi Fu, & R.-M. Hsung, 2001). Empirical studies have tested or discussed measurement properties of the instrument, but not the underlying response process. In 35 semistructured cognitive interviews across gender, education, and age groups, we asked respondents to reflect on the 1999/2000 Social Survey of the Dutch (SSND) Position Generator. Effects found were unfamiliarity with occupations, interpretation of occupations, unknown occupations of alters, forcing alters into occupations, speculation, and forgetting single alters and groups of alters, but no detectable misrepresentation of alters. In only 6 interviews were all alters working in a paid job (as the PG assumes); the most notable alternatives were retired, unemployed, or deceased alters. An overall impression of the responses is that recalling alters to fit occupations feels counterintuitive to how relationships are memorized. Item validity and reliability are therefore likely to be negatively affected, but whether all combined ambiguities affect social capital measures is difficult to predict. Yet, an underestimation of social capital seems likely. Implications and ideas for future development of PGs are discussed.\\ \\ Verhaeghe PP, Van de Putte B, Roose H. [[http://fmx.sagepub.com/content/25/3/238.full.pdf|Reliability of Position Generator Measures across Different Occupational Lists: A Parallel Test Experiment]]. Field Methods. 2012; 25(3):238-261.\\ \\ The position generator is a widely used research tool to measure individual social capital. Although the position generator is said to be reliable, there are only a few broad guidelines to construct the instrument and there is no standard list of occupational items. Furthermore, the reliability of the position generator across different occupational lists has not yet been tested. This article examines the reliability of 13 position generator measures across different occupational lists by means of a parallel test experiment. We found that only the volume measure has good reliability. Reliability of the social class–based position generator measures is fair to good, and reliability of the occupational prestige/status–based position generator measures is poor. These latter types of measures are the ones most often used.\\ \\ === Resource generators === \\ Van Der Gaag M, Snijders TAB. [[http://www.sciencedirect.com/science/article/pii/S0378873304000607|The Resource Generator: social capital quantification with concrete items. Social Networks]]. 2005; 27:1-29.\\ \\ In research on the social capital of individuals, there has been little standardisation of measurement instruments, and more emphasis on measuring social relationships than on social resources. In this paper, we propose two innovations. First, a new measurement method: the Resource Generator; an instrument with concretely worded items covering 'general' social capital in a population, which combines advantages of earlier techniques.
Construction, use, and first empirical findings are discussed for a representative sample (N=1004) of the Dutch population in 1999–2000. Second, we propose to investigate social capital by latent trait analysis, and we identify four separately accessed portions of social capital: prestige and education related social capital, political and financial skills social capital, personal skills social capital, and personal support social capital. This underlines that social capital measurement needs multiple measures, and cannot be reduced to one total measure of indirectly ‘owned’ resources. Constructing a theory-based Resource Generator is a challenge for different contexts of use, but also retrieves meaningful information when investigating the productivity and goal specificity of social capital.\\ \\ Webber MP, Huxley PJ. [[http://www.sciencedirect.com/science/article/pii/S0277953607001578|Measuring access to social capital: The validity and reliability of the Resource Generator-UK and its association with common mental disorder]]. Social Science and Medicine. 2007; 65:481-492.\\ \\ Resource generators measure an individual's access to social resources within their social network. They can facilitate the analysis of how access to these resources may assist recovery from illness. As these instruments are culture and context dependent different versions need to be validated for different populations. Further, they are yet to be subjected to a thorough content validation and their reliability and validity have not been established beyond an examination of their internal scales. This paper reports the validity and reliability of a version suitable for general population use in the UK. Firstly, a qualitative process of item selection and review through focus groups and an expert panel ensured that the resource items were relevant. Also, cognitive interviews identified any significant problems prior to extensive piloting. Then we examined its internal domains using Mokken scaling in a small general population survey (n=295). Its concurrent validity with a similar instrument was tested in a further pilot (n=335) and these findings were supported by a known-group validity study (n=65). Its reliability was established in a test–retest study (n=47) in addition to an examination of the reliability coefficients of the internal scales. We found that the Resource Generator-UK has good psychometric properties, though there is some variation in performance between items and scales. Further, we found an inverse relationship with common mental disorder in the second pilot we undertook.\\ \\ === Visualizations in Network Data Collection === \\ Freeman, LC. [[https://www.cmu.edu/joss/content/articles/volume1/Freeman.html|Visualizing Social Networks. Journal of Social Structure.]] 2000; 1(1):1-20.\\ \\ Hogan, B, Carrasco, JA, Wellman, B.[[http://journals.sagepub.com/doi/pdf/10.1177/1525822X06298589| Visualizing Personal Networks: Working with Participant-aided Sociograms]]. Field Methods. 2007; 19(2):116-144.\\ \\ We describe an interview-based data-collection procedure for social network analysis designed to aid gathering information about the people known by a respondent and reduce problems with data integrity and respondent burden. This procedure, a participant-aided network diagram (sociogram), is an extension of traditional name generators. 
Although such a diagram can be produced through computer-assisted programs for interviewing (CAPIs) and low technology (i.e., paper), we demonstrate both practical and methodological reasons for keeping high technology in the lab and low technology in the field. We provide some general heuristics that can reduce the time needed to complete a name generator. We present findings from our Connected Lives field study to illustrate this procedure and compare it to an alternative method for gathering network data.\\ \\ Hogan B, Melville JR, Philips GL, Janulis P, Contractor N, Mustanski BS, Birkett M. [[http://s3.amazonaws.com/academia.edu.documents/44947382/chi.pdf?AWSAccessKeyId=AKIAJ56TQJRTWSMTNPEA&Expires=1475268398&Signature=rbG24Q9bd15jmO%2Ft5UmF5ol3NU8%3D&response-content-disposition=inline%3B%20filename%3DEvaluating_the_Paper-to-Screen_Translati.pdf|Evaluating the Paper-to-Screen Translation of Participant-Aided Sociograms with High-Risk Participants]]. In //Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems//, pp. 5360-5371. ACM, 2016. DOI: http://dx.doi.org/10.1145/2858036.2858368\\ \\ While much social network data exists online, key network metrics for high-risk populations must still be captured through self-report. This practice has suffered from numerous limitations in workflow and response burden. However, advances in technology, network drawing libraries and databases are making interactive network drawing increasingly feasible. We describe the translation of an analog-based technique for capturing personal networks into a digital framework termed netCanvas that addresses many existing shortcomings such as: 1) complex data entry; 2) extensive interviewer intervention and field setup; 3) difficulties in data reuse; and 4) a lack of dynamic visualizations. We test this implementation within a health behavior study of a high-risk and difficult-to-reach population. We provide a within-subjects comparison between paper and touchscreens. We assert that touchscreen-based social network capture is now a viable alternative for highly sensitive data and social network data entry tasks.\\ \\ Schiffer, E, Hauck, J. [[http://journals.sagepub.com/doi/pdf/10.1177/1525822X10374798|Net-Map: Collecting Social Network Data and Facilitating Network Learning through Participatory Influence Network Mapping]]. Field Methods. 2010; 22(3):231-249.\\ \\ The authors describe how to use Net-Map, a low-tech, low-cost, interview-based mapping tool that can be used by researchers, facilitators, and implementers to (1) visualize implicit knowledge and understand the interplay of complex formal and informal networks, power relations, and actors' goals; (2) uncover sources of conflicts as well as potentials for cooperation; (3) facilitate knowledge exchange and learning processes; and (4) develop visions and strategies to achieve common goals. The authors show that the tool can produce both qualitative and quantitative data to increase network understanding by going beyond the purely structure-driven approach of social network analysis (SNA) and combine structural measures with measures of attributes of actors, especially concerning their perceived influence and their goals. The authors present experiences from a field study from Ghana to illustrate the procedure and briefly discuss possible applications of the tool.\\ \\ Stark TH, Krosnick JA. [[http://www.sciencedirect.com/science/article/pii/S0378873316300284|Gensi: A new graphical tool to collect ego-centered network data]].
Social Networks. 2017;48:36-45.\\ \\ This study (1) tested the effectiveness of a new survey tool to collect ego-centered network data and (2) assessed the impact of giving people feedback about their network on subsequent responses. The new tool, GENSI (Graphical Ego-centered Network Survey Interface), allows respondents to describe all network contacts at once via a graphical representation of their networks. In an online experiment, 434 American adults were randomly assigned to answer traditional network questions or GENSI and were randomly assigned to receive feedback about their network or not. The traditional questionnaire and GENSI took the same amount of time to complete, and measurements of racial composition of the network showed equivalent convergent validity in both survey tools. However, the new tool appears to solve what past researchers have considered to be a problem with online administration: exaggerated numbers of network connections. Moreover, respondents reported enjoying GENSI more than the traditional tool. Thus, using a graphical interface to collect ego-centered network data seems to be promising. However, telling respondents how their network compared to the average Americans reduced the convergent validity of measures administered after the feedback was provided, suggesting that such feedback should be avoided.\\ \\ Tubaro P, Casilli AA, Mounier L. [[http://fmx.sagepub.com/content/early/2013/07/11/1525822X13491861.full.pdf|Eliciting Personal Network Data in Web Surveys through Participant-generated Sociograms]]. Field Methods. 2014; 26(2):107-125.\\ \\ The article presents a method to elicit personal network data in Internet surveys, exploiting the renowned appeal of network visualizations to reduce respondent burden and risk of dropout. It is a participant-generated computer-based sociogram, an interactive graphical interface enabling participants to draw their own personal networks with simple and intuitive tools. In a study of users of websites on eating disorders, we have embedded the sociogram within a two-step approach aiming to first elicit the broad ego network of an individual and then to extract subsets of issue-specific support ties. We find this to be a promising tool to facilitate survey experience and adaptable to a wider range of network studies.\\ \\ ===Using the Web/computer-assisted to collect ego nets=== \\ Coromina, L, Coenders, G. [[http://www.sciencedirect.com/science/article/pii/S0378873305000493|Reliability and validity of egocentered network data collected via web. A meta-analysis of multilevel multitrait multimethod studies]]. Social Networks. 2006; 28:209-231.\\ \\ Our goal in this article is to assess reliability and validity of egocentered network data collected through web surveys using multilevel confirmatory factor analysis under the multitrait multimethod approach. In this study, we analyze a questionnaire of social support of Ph.D. students in three European countries. The traits used are the frequency of social contact questions. The methods used are web survey design variants. We consider egocentered network data as hierarchical; therefore, a multilevel analysis is required. Within and between-ego reliabilities and validities are defined and interpreted. 
Afterwards, we proceed to a meta-analysis of the results of the three countries where within and between-ego validities and reliabilities are predicted from survey design variables which have to do with question order (by questions or by alters), response category labels (end labels or all labels) and lay-out of the questionnaire (graphical display or plain text). Results show that question order by questions, all-labeled response categories and a graphical display lay-out with images lead to better data quality. Our basic approach, consisting of multilevel analysis and meta-analysis, can be applied to evaluate the quality of any type of egocentered network questionnaire, regardless of the data collection mode.\\ \\ Gerich J, Lehner R. Collection of Ego-Centered Network Data with Computer-Assisted Interviews. Methodology. 2006; 2(1):7-15.\\ \\ Although ego-centered network data provide information that is limited in various ways as compared with full network data, an ego-centered design can be used without the need for a priori and researcher-defined network borders. Moreover, ego-centered network data can be obtained with traditional survey methods. However, due to the dynamic structure of the questionnaires involved, a great effort is required on the part of either respondents (with self-administration) or interviewers (with face-to-face interviews). As an alternative, we will show the advantages of using CASI (computer-assisted self-administered interview) methods for the collection of ego-centered network data as applied in a study on the role of social networks in substance use among college students.\\ \\ Hampton KN. [[http://www.jstor.org/stable/24359896?seq=1#page_scan_tab_contents|Computer-assisted interviewing: The design and application of survey software to the Wired Suburb Project]]. Bulletin de Methodologie Sociologique. 1999; 62:49-68.\\ \\ This paper explores the use of Internet and personal computer-based interviewing in the University of Toronto's Wired Suburb (Netville) Project. The use of computer-assisted interviewing (CAI) in this project differs from other examples in its use of social network questions, of a time-use diary, and of Internet (Web) and personal computer (PC) based interviewing of a small residential population. The purpose of this paper is to develop an understanding of why CAI, specifically Computer Assisted Personal Interviewing (CAPI) and Computerized Self-Administered Interviewing (CSAI), may be more appropriate for some research projects than others, to explore specific problems with the technology and approach used in this study, and to explore specific challenges for the use of CAI in social network and time-use analysis.\\ \\ Lozar Manfreda K, Vehovar V, Hlebec V. [[http://search.proquest.com/openview/3e1d238ac45983700cabd9399bc8fdd7/1?pq-origsite=gscholar|Collecting Ego-centred Network Data via the Web]]. Metodoloski zvezki. 2004; 1(2):295-321.\\ \\ One trial in the collection of ego-centred networks via the Web was performed during the annual RIS (Research on Internet in Slovenia) Web survey conducted by the Faculty of Social Sciences, University of Ljubljana. Respondents were randomly split into four groups. Each group received a name generator for one type of social support: material, informational, emotional support or social companionship. Each respondent also received a set of questions for each alter they named in the network generator. Data collection was carried out between June and October 2001.
The quality of the data was studied with respect to the number of listed alters and by two question wording forms for name generators. The analysis shows that the Web can be used as a data collection method for ego-centred social networks. However, special attention is required when designing the graphic layout of name generators as well as with the wording of instructions. In particular, the number of alters should be limited in some way, since respondents who name many alters tend to quit the questionnaire before answering additional questions regarding these alters.\\ \\ Matzat U, Snijders C. [[http://www.sciencedirect.com/science/article/pii/S0378873309000483|Does the online collection of ego-centered network data reduce data quality? An experimental comparison]]. Social Networks. 2010; 32:105-111.\\ \\ We analyze whether differences in kind and quality of ego-centered network data are related to whether the data are collected online or offline. We report the results of two studies. In the first study respondents could choose between filling out ego-centered data through a web questionnaire and being probed about their network in a personalized interview. The second study used a design in which respondents were allocated at random to either online or offline data collection. Our results show that data quality suffers from the online data collection, and the findings indicate that this is the consequence of the respondents answering "mechanically". We conclude that network researchers should avoid simply copying traditional network items into a web questionnaire. More research is needed on how new design elements specific to web questionnaires can motivate respondents to fill out network questions properly.\\ \\ Vehovar V, Lozar Manfreda K, Koren G, Hlebec V. [[http://www.sciencedirect.com/science/article/pii/S0378873308000142|Measuring ego-centered social networks on the web: Questionnaire design issues]]. Social Networks. 2008; 30:213-222.\\ \\ Collecting survey data on ego-centered social networks is a difficult task, owing to the complex questionnaire format. Usually, the interviewer handles the dynamics of the question–answer exchange, motivates the respondent and ensures the proper recording of the data. Self-administered modes of data collection, especially web data collection, are more problematic, as the respondents are left alone with a complex and burdensome questionnaire. Therefore, questionnaire layout is crucial for ensuring cooperation and data quality. In this paper we examined three key components of the corresponding web questionnaire: the number of name boxes using a single name generator, question format for assessing alter characteristics (i.e. alter-wise vs. question-wise) and number of name interpreters (i.e. alter characteristics). The number of name boxes was found to be essential for the reported size of social networks and also for some aspects of data quality. Specific data quality effects were also found with respect to variations in question format, where the question-wise format performed better than the alter-wise format. The number of name interpreters had a relatively minor effect. Suggestions for possible standardization of the web interface layout are also given, so that equivalence with other data collection modes can be established.\\ \\
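Several entries on this page, Kogovsek and Ferligoj (2005, under Name interpreters above) and Coromina and Coenders (2006) and Vehovar et al. (2008) in this section, compare administering name interpreters alter-wise (all questions about one alter, then the next alter) against question-wise (one question across all alters, then the next question). As a purely illustrative sketch of the difference, not EgoWeb code, the snippet below builds both prompt orders from hypothetical lists of alters and name-interpreter items:
<code python>
# Illustrative sketch only: the two orderings of name-interpreter prompts
# compared in Kogovsek & Ferligoj (2005), Coromina & Coenders (2006),
# and Vehovar et al. (2008). The alters and questions are hypothetical.

alters = ["Ana", "Ben", "Chris"]
interpreters = ["How close do you feel to {alter}?",
                "How often do you talk to {alter}?"]

def alter_wise(alters, interpreters):
    """Ask every question about one alter before moving on to the next alter."""
    return [q.format(alter=a) for a in alters for q in interpreters]

def question_wise(alters, interpreters):
    """Ask one question about every alter before moving on to the next question."""
    return [q.format(alter=a) for q in interpreters for a in alters]

print("\n".join(alter_wise(alters, interpreters)))
print("---")
print("\n".join(question_wise(alters, interpreters)))
</code>
The evidence above does not settle on a single best order: Coromina and Coenders (2006) and Vehovar et al. (2008) report better data quality for the question-wise format, while Kogovsek and Ferligoj (2005) found the telephone, alter-by-alter combination the most reliable, so the choice remains a study-specific design decision.\\ \\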
===Using the Web/computer-assisted to collect ego nets - new tools=== \\ Lackaff D. [[https://www.researchgate.net/profile/Derek_Lackaff/publication/250309312_New_opportunities_in_personal_network_data_collection/links/00b4951eac7559aa8e000000.pdf|New opportunities in personal network data collection]]. In M. Zacarias & J.V. de Oliveira (Eds.): Human-Computer Interaction, SCI 396, pp. 389–407. 2012.\\ \\ One of the central challenges of ego-centric or personal social network research is the quantity and quality of data that is required from research participants. In general, collecting data about increasingly larger ego-centric networks places an increasing burden on respondents. However, the recent development and increasing ubiquity of web applications that rely on social graphs present interesting new opportunities and challenges for data collection efforts. This chapter addresses this emerging context for social research, and reports the results of an experimental evaluation of an online computer-assisted self interview (CASI) survey tool called PASN (Propitious Aggregation of Social Networks). Personal networks acquired via the PASN tool were found to be larger and more diverse than those produced using standard survey methods, yet required significantly lower time investments from participants. The implications of new methods such as PASN for social network research are discussed, along with considerations and recommendations for future research.\\ \\ Ricken ST, Schuler RP, Grandhi SA, Jones Q. [[http://s3.amazonaws.com/academia.edu.documents/41991806/01-14-11.pdf?AWSAccessKeyId=AKIAJ56TQJRTWSMTNPEA&Expires=1475268982&Signature=VYxqdZs2hlYTHOBnqGfeKcT6Y%2FQ%3D&response-content-disposition=inline%3B%20filename%3DTellUsWho_Guided_Social_Network_Data_Col.pdf|//TellUsWho//: Guided Social Network Data Collection]]. Proceedings of the 43rd Hawaii International Conference on System Sciences. 2010; pp. 1-10.\\ \\ Significant gaps exist in our knowledge of real world social network structures, which in turn limit our understanding of how to design social software. One important reason for this has been that researchers have not been able to systematically probe individuals in sufficient detail about 'who' and 'how' they interact with in the social networks they wish to study. To address this shortcoming we designed TellUsWho, a web-based social network survey tool. We explored the tool's utility by studying the social ties of 141 students. TellUsWho supported the collection of rich social network data in a relatively short time period. Within an average of 34 minutes, respondents were able to describe their egocentric ties with people they regularly keep in contact with. On average, respondents listed 42 alters, for each of which they answered 27 questions, resulting in 1134 responses. This compares favorably to traditional methods, which could require up to 15 hours per subject.\\ \\ === Respondent Burden Reduction === \\ == Limiting Number of Alters == \\ Kogovsek T, Mrzel M, Hlebec V. [[http://search.proquest.com/openview/62a1a2c0b1f18b1f3115d2c4fed7705d/1?pq-origsite=gscholar&cbl=1396367|"Please Name the First Two People you Would Ask for Help": The Effect of Limitation of the Number of Alters on Network Composition]].
Advances in Methodology & Statistics / Metodoloski zvezki. 2010; 7(2):95-106. (This is the same paper as the Kogovsek, Mrzel, and Hlebec entry under Name Generators above; the abstract is given there.)\\ \\ Holland, PW, Leinhardt, S. [[http://www.tandfonline.com/doi/abs/10.1080/0022250X.1973.9989825|The structural implications of measurement error in sociometry.]] The Journal of Mathematical Sociology. 1973; 3(1):85-111.\\ \\ Measurement error, an inherent quality of any empirical data collection technique, is discussed in the context of sociometric data. These data have long been assumed to possess face validity and to be the data of choice in any study of the sentiment structure of small scale social systems. However, it is argued that while methods of sociometric analysis have become increasingly more sophisticated they have failed to yield unequivocal results because they do not distinguish structural complexity from measurement error. Through a discussion of increasingly more complex examples the distortion laden character of most sociometric data is illustrated. This distortion is introduced by the formalities of the sociometric test and it will not be removed by developing increasingly more sophisticated structural models or throwing out some of the data. Instead, when issues concerning the nature of specific relational networks are raised, data of much higher quality than those which are commonly available are required. A technique for generating high quality sociometric data is briefly discussed. On the other hand, it is suggested that the extant body of sociometric data ought to be adequate when sizeable aggregations are examined for evidence of statistical tendencies in structure.\\ \\ Broese van Groenou, M., van Sonderen, E., and Ormel, J. (1990): Test-retest reliability of personal network delineation. In Knipscheer, C.P.M. and Antonucci, T.C. (Eds.): Social Network Research: Substantive Issues and Methodological Questions, 121-136. Amsterdam: Swets and Zeitlinger.\\ \\ == Alter Sampling == \\ McCarty C, Killworth PD, Rennell J. [[http://www.sciencedirect.com/science/article/pii/S0378873307000056|Impact of methods for reducing respondent burden on personal network structural measures]]. Social Networks.
2007; 29:300-315.\\ \\ We examine methods for reducing respondent burden in evaluating alter–alter ties on a set of network structural measures. The data consist of two sets, each containing 45 alters from respondent free lists: the first contains 447 personal networks, and the second 554. Respondents evaluated the communication between 990 alter pairs. The methods were (1) dropping alters from the end of the free-list, (2) randomly dropping alters, (3) randomly dropping links, and (4) predicting ties based on transitivity. For some measures network structure is captured with samples of less than 20 alters; other measures are less consistent. Researchers should be aware of the need to sample a minimum number of alters to capture structural variation.\\ \\ Golinelli D, Ryan G, Green HD, Kennedy DP, Tucker JS, Wenzel SL. [[https://scholar.google.com/citations?view_op=view_citation&hl=en&user=uXdVu6QAAAAJ&cstart=40&sortby=pubdate&citation_for_view=uXdVu6QAAAAJ:eQOLeE2rZwMC|Sampling to reduce respondent burden in personal network studies and its effect on estimates of structural measures]]. Field Methods. 2010; 22(3):217-230.\\ \\ Recently, researchers have been increasingly interested in collecting personal network data. Collecting this type of data is particularly burdensome on the respondents, who need to elicit the names of alters, answer questions about each alter (network composition), and evaluate the strength of possible relationships among the named alters (network structure). In line with the research of McCarty et al., the authors propose reducing respondent burden by randomly sampling a smaller set of alters from those originally elicited. Via simulation, the authors assess the estimation error they incur when measures of the network structure are computed on a random sample of alters and illustrate the trade-offs between reduction in respondent burden (measured with the amount of interview time saved) and total estimation error incurred. Researchers can use the provided trade-offs figure to make an informed decision regarding the number of alters to sample when they need to reduce respondent burden.\\ \\
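A quick way to see what the alter-sampling idea in the two entries above involves is to compute a structural measure on the full set of elicited alters and on random subsamples, then compare the estimates. The sketch below does this for density; it is a minimal illustration of the general approach, not the McCarty et al. or Golinelli et al. procedure, and the ego network it uses is fabricated for the example.
<code python>
# Minimal sketch of alter sampling: how estimates of ego-network density behave
# when only a random subsample of alters has its alter-alter ties evaluated.
# The example network is randomly generated; a real check would use pilot data.
import random

def density(ties, alters):
    """Proportion of possible alter-alter pairs that are connected."""
    n = len(alters)
    if n < 2:
        return 0.0
    present = sum(1 for i in range(n) for j in range(i + 1, n)
                  if frozenset((alters[i], alters[j])) in ties)
    return present / (n * (n - 1) / 2)

random.seed(1)
alters = list(range(45))                        # e.g., a 45-alter free list (990 pairs)
ties = {frozenset((i, j)) for i in alters for j in alters
        if i < j and random.random() < 0.2}     # fabricated alter-alter ties

full = density(ties, alters)
for k in (10, 20, 30):                          # candidate subsample sizes
    estimates = [density(ties, random.sample(alters, k)) for _ in range(500)]
    mean_est = sum(estimates) / len(estimates)
    print(f"sample of {k:2d} alters: mean density {mean_est:.3f} vs full {full:.3f}")
</code>
Because the number of alter-alter pairs to evaluate grows roughly with the square of the number of alters (45 alters already imply 990 pairs), even moderate subsamples cut interview burden substantially; the papers above quantify the estimation error this introduces for a range of structural measures.\\ \\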
Usually, several variables are measured to describe the relationship between egos and alters. In this paper, the aim is to estimate the reliability and validity of the averages of these measures by the multitrait-multimethod (MTMM) approach. In the study, web and telephone modes of data collection are compared on a convenience sample of 238 second year students at the Faculty of Social Sciences at the University of Ljubljana. The data was collected in 2003. The results show that the telephone mode produces more reliable data than the web mode of data collection. Also, method order effect was shown: the data collection mode used first produces data of lower reliability than the mode used for the second measurement. There were no large differences in validity of measurement.\\ \\ Marsden, PV. [[http://www.sciencedirect.com/science/article/pii/S0378873302000096|Interviewer effects in measuring network size using a single name generator]]. Social Networks. 2003;25:1-16.\\ \\ Name generators used to measure egocentric networks are complex survey questions that make substantial demands on respondents and interviewers alike. They are therefore vulnerable to interviewer effects, which arise when interviewers administer questions differently in ways that affect responses-in this case, the number of names elicited. Van Tilburg [Sociol. Methods Res. 26 (1998) 300] found significant interviewer effects on network size in a study of elderly Dutch respondents; that study included an instrument with seven name generators, the complexity of which may have accentuated interviewer effects. This article examines a simpler single-generator elicitation instrument administered in the 1998 General Social Survey (GSS). Interviewer effects on network size as measured by this instrument are smaller than those found byVan Tilburg, but only modestly so.Variations in the network size of respondents within interviewer caseloads (estimated using a single-item “global” measure of network size and an independent sample of respondents) reduce but do not explain interviewer effects on the name generator measure. Interviewer differences remain significant after further controls for between-interviewer differences in the sociodemographic composition of respondent pools. Further insight into the sources of interviewer effects may be obtained via monitoring respondent–interviewer interactions for differences in howname generators are administered.\\ \\ Paik, A, Sanchagrin, K. [[http://journals.sagepub.com/doi/full/10.1177/0003122413482919|Social Isolation in America: An Artifact]]. American Sociological Review. 2013; 78(3):339-360.\\ \\ This article examines whether existing estimates of network size and social isolation, drawn from egocentric name generators across several representative samples, suffer from systematic biases linked to interviewers. Using several analytic approaches, we find that estimates of network size found in the 2004 and 2010 General Social Surveys (GSS), as well as other representative samples, were affected by significant interviewer effects. Across these surveys, we find a negative correlation between interviewer effects and mean network size. In the 2004 GSS, levels of social connectivity are strongly linked to interviewer-level variation and reflect the fact that some interviewers obtained highly improbable levels of social isolation. In the 2010 GSS, we observe larger interviewer effects in two versions of the questionnaire in which training and fatigue effects among interviewers were more likely. 
Results support the argument that many estimates of social connectivity are biased by interviewer effects. Some interviewers’ failure to elicit network data makes inferences, such as the argument that networks have become smaller, an artifact. Overall, this study highlights the importance of interviewer effects for network data collection and raises questions about other survey items with similar issues.\\ \\ Van Tilburg, T. [[http://journals.sagepub.com/doi/pdf/10.1177/0049124198026003002|Interviewer Effects in the Measurement of Personal Network Size]]. Sociological Methods & Research. 1998; 26(3):300-328.\\ \\ Methods for delineating personal networks in surveys contain complex instructions for the interviewers. It is assumed that the interviewers' experience and education influence their ability to follow these instructions. The magnitude of the interviewer effects on personal network size has been investigated, and differences among interviewers have been explained on the basis of their experience and education. The data are from a survey among 4,059 older adults in the Netherlands interviewed in 1992 by 87 interviewers. A strong interviewer effect was observed. Furthermore, the results of a multilevel regression analysis showed that, controlling for respondent characteristics, well-educated interviewers with little experience prior to the project and greater experience within the project (i.e., a high sequence number of the interview) generated relatively large networks.\\ \\ === Meta-analyses / Reviews === \\ Hlebec, V, Kogovsek, T. Different approaches to measure ego-centered social support networks: a meta-analysis. Quality & Quantity. 2013; 1-21.\\ \\ Survey indicators of social networks usually measure a certain function of social networks, for example, the exchange of social support. Social support is a multidimensional construct. The most comprehensive definition distinguishes among sources of social support (social support networks), supportive acts, and appraisal of given support. Generally, two main hypotheses can be given with regard to the role social support plays in the quality of life of individuals: that social support is beneficial as such (main effect), or that social support is beneficial in the presence of stressful events (buffering effect). In this paper we deal with survey measurement of ego-centered social support networks. Three methods of social network measurement are compared: the name generator method, the role generator method, and the event-related approach. In a meta-analysis of several studies done on convenience quota samples, the effects of method, type of calculation, response format, and limitation of support providers on network composition indicators are studied.\\ \\ Marsden, PV. [[http://www.annualreviews.org/doi/pdf/10.1146/annurev.so.16.080190.002251|Network Data and Measurement.]] Annual Review of Sociology. 1990; 16:435-463.\\ \\ Data on social networks may be gathered for all ties linking elements of a closed population ("complete" network data) or for the sets of ties surrounding sampled individual units ("egocentric" network data). Network data have been obtained via surveys and questionnaires, archives, observation, diaries, electronic traces, and experiments. Most methodological research on data quality concerns surveys and questionnaires. The question of the accuracy with which informants can provide data on their network ties is nontrivial, but survey methods can make some claim to reliability.
Unresolved issues include whether to measure perceived social ties or actual exchanges, how to treat temporal elements in the definition of relationships, and whether to seek accurate descriptions or reliable indicators. Continued research on data quality is needed; beyond improved samples and further investigation of the informant accuracy/reliability issue, this should cover common indices of network structure, address the consequences of sampling portions of a network, and examine the robustness of indicators of network structure and position to both random and nonrandom errors of measurement.\\ \\ === Informant accuracy === \\ adams, j, Moody, J. [[http://www.sciencedirect.com/science/article/pii/S0378873305000870|To tell the truth: Measuring concordance in multiply reported network data.]] Social Networks. 2007; 29:44-58.\\ \\ Social network data must accurately reflect actors’ relationships to properly estimate network features. Here, we examine multiple reports of sexual, drug-sharing, and social tie data on high-risk networks in Colorado Springs. By comparing multiple reports on the same ties, we can evaluate the reliability of this study’s network data. Our findings suggest that these data have a high level of reporting agreement. From these findings, we discuss implications for analysis of these and similar data and provide suggestions for future social network data collection efforts.\\ \\ Bell, DC, Belli-McQueen, B, Haider, A. [[http://www.sciencedirect.com/science/article/pii/S0378873306000554|Partner naming and forgetting: Recall of network members]]. Social Networks. 2007; 29:279-299.\\ \\ Network researchers must contend with recall, forgetting, alters whose names are not known, and other potential biases in estimating the size of personal (ego) networks. We use data from a longitudinal study of sexual and drug use ego networks. Results show 6% forgetting for 30-day sex partners, 18% for drug use partners, and 26% for close friends. Forgetting is decreased by behavioral specificity and salience. Forgetting increases with network size and time frame. In the domain of sex relationships, global estimates of network size, at least over a period of 30 days, are equivalent to estimates from partner naming 92% of the time if anonymous partners are accounted for.\\ \\ Bernard, HR, Killworth, PD. [[http://nersp.osg.ufl.edu/~ufruss/documents/accuracy%20II.pdf|Informant Accuracy in Social Network Data II]]. Human Communication Research. 1977; 4(1):3-18.\\ \\ This paper repeats and confirms the results of Killworth and Bernard (1976), concerning informants’ ability to report their communication accurately. A variety of self-monitoring, or nearly self-monitoring, networks are used for this study.
The conclusion, again, is that people do not know, with any accuracy, those with whom they communicate.\\ The expanded experimental design permits a variety of other, related questions to be answered: recall of past communication is not significantly more accurate than prediction of future communication; no one set of data is more accurate than any other; the maintenance of personal logs of communication does not improve accuracy; informants do not know if they are accurate or not; and there is no reason to choose either rankings or scalings as a data instrument save for convenience.\\ It is suggested that future research should concentrate both on improving the accuracy of data-gathering instruments and on lessening the reliance of data-processing instruments on precise data.\\ \\ Bernard, HR, Killworth, PD, Sailer, L. [[http://nersp.osg.ufl.edu/~ufruss/documents/accuracy%20IV.pdf|Informant Accuracy in Social Network Data IV: A Comparison of Clique-Level Structure in Behavioral and Cognitive Network Data]]. Social Networks. 1979/80; 2:191-218.\\ \\ This paper examines whether clique structure in cognitive data (i.e. recall of who one talks to) may be used as a proxy for clique structure in behavioral data (i.e. who one actually talks to). The answer to this question is crucial to much of the sociometric and social network-theoretic study of social structure.\\ We analysed the clique structures of the communication patterns of four naturally occurring groups of sizes 34 to 58, whose actual communications could easily be monitored, together with the groups’ perceptions of their communications. The groups used were: radio hams, a college fraternity, a group of office workers, and an academic department. The analysis used clique-finding, block-modelling, and factor-analytic techniques, all employed in such a way as to maximize the accuracy of the cognitive data.\\ After defining a way to compare clique structures between behavioral and cognitive data, we found that there was no useful relationship between the two, and furthermore there was no significant difference in performance between any of the structure-finding algorithms.\\ We conclude that cognitive data may not be used for drawing any conclusions about behavioral social structure.\\ \\ Bernard, HR, Killworth, PD, Sailer, L. [[http://nersp.osg.ufl.edu/~ufruss/documents/accuracy%20V.pdf|Informant Accuracy in Social-Network Data V: An Experimental Attempt to Predict Actual Communication from Recall Data]]. Social Science Research. 1982; 11:30-66.\\ \\ This paper seeks to discover whether the known inaccuracy of informant recall about their communication behavior can be accounted for by experimentally varying the time period over which recall takes place. The experiment took advantage of a new communications medium (computer conferencing) which enabled us to monitor automatically all the interactions involving a subset of the computer network. The experiment itself was administered entirely by the computer, which interviewed informants and recorded their responses. Variations in time period failed to account for much of the inaccuracy, which continues, as in previous experiments, at an unacceptably high level. One positive finding did emerge: although the informants did not know with whom they communicated, the informants en masse seemed to know certain broad facts about the communication pattern. All other findings were negative.
For example, it is impossible to predict the people an informant claimed to communicate with but did not; and it is impossible to predict who the five people are that an informant forgot to mention having communicated with. Thus, despite their presumed good intentions, our findings here confirm what we have learned from six previous experiments: what people say about their communications bears no resemblance to their behavior. This suggests that other forms of data gathering, based on questions which require that informants recall their behavior, may well be suspect.\\ \\ Brewer, DD. [[http://www.sciencedirect.com/science/article/pii/S0378873399000179|Forgetting in the recall-based elicitation of personal and social networks]]. Social Networks. 2000; 22:29-43.\\ \\ Forgetting in the recall-based elicitation of personal and social networks poses a potentially significant problem for the collection of complete network data and unbiased measurement of network characteristics and properties. A comprehensive review of the literature shows that forgetting is a pervasive, non-trivial phenomenon in the recall-based elicitation of personal and social networks pertaining to a broad variety of social relations. There appear to be no good predictors of individuals’ proportional level of forgetting, although the number of persons an individual recalls is moderately positively correlated with the number of persons he or she forgets. People seem to be more likely to forget weak ties than strong ties, but the evidence is mixed on this point. In any event, people still forget a significant proportion of their close contacts. Non-specific prompting for additional relevant persons, multiple elicitation questions, and re-interviewing enhance recall slightly to moderately and are the only methods currently available to counteract forgetting, albeit only partially.\\ \\ Hammer, M. [[http://www.sciencedirect.com/science/article/pii/037887338490008X|Explorations into the meaning of social network interview data]]. Social Networks. 1984; 6:341-371.\\ \\ This paper is concerned with issues arising in the use of interview-derived social network data. First, are respondents’ relationships correctly reported? Data from dyads in which respondents and those they named were both interviewed indicate high agreement on the characteristics of the relationships. Second, which relationships are named and which are not? Data from interviews, supplemented with a long list of individuals, some of whom were spontaneously named and some of whom were also known but not named, indicate that respondents select in terms of frequency, recency, and how well they know the person, but, unexpectedly, not duration. Comparison of men’s and women’s selections suggests that women more strongly limit their naming to those they know very well.\\ \\ Killworth, PD, Bernard, HR. [[http://nersp.osg.ufl.edu/~ufruss/documents/accuracy%20I.pdf|Informant Accuracy in Social Network Data]]. Human Organization. 1976; 35(3):269-286.\\ \\ This paper examines the problem of informant accuracy in the production of social network data, through the use of a self-monitoring network. This allows a comparison between cognitive network data and informants' interactive behavior. Against expectations, it turns out that informants are extremely inaccurate. In other words, informants' reports of their behavior bear little resemblance to their behavior.
If an informant claimed to have communicated with some person "the most frequently," then, in fact, he communicated with that person between first and fourth most frequently only 52% of the time. The implications of our findings for sociometric and network analysis are: (1) Attempts to filter out noise in a sociometric network matrix by using sophisticated software are likely to be unproductive. This is because such manipulations assume a much lower level of noise than actually occurs. (2) Due to the low level of informant accuracy, theories of social structure built upon presently available network data are suspect.\\ \\ Killworth, PD, Bernard, HR. [[http://nersp.osg.ufl.edu/~ufruss/documents/accuracy%20III.pdf|Informant Accuracy in Social Network Data III: A Comparison of Triadic Structure in Behavioral and Cognitive Data]]. Social Networks. 1979/80; 2:10-46.\\ \\ This paper provides a comparison of the triadic-level structure inherent in behavioral and cognitive social network data taken on the same group, using a variety of groups whose communication could easily be monitored.\\ It is found that many types of structure occur significantly more or less often than chance in both behavioral and cognitive data, and, provided that these are treated in similar ways, there is good agreement between the two structures. However, there are several ways to treat behavioral data, and these produce at least two essentially different structures.\\ If cognitive and behavioral triads are compared, triad by triad, then there is virtually no agreement between them (even though they may both display the same structure on an overall triad census).\\ Finally, as a demonstration of the dangers of relying solely on cognitive data, an unlikely null hypothesis is proposed. This asserts - for demonstration purposes - that, under many circumstances, behavioral structure never alters. Change in structure over time apparently occurs because of informant error in the reporting of the cognitive data. A pseudo-transition matrix, giving the probability that a triad is reported as one type when data are first taken, and a different type at a later date, is calculated. This compares reasonably with a genuine transition matrix evaluated for longitudinal cognitive data. It is believed that no data currently exist which can disprove this hypothesis, unlikely though that is. Much more accurate data are therefore necessary if any reliable theory of social structure is to be produced.
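\\ The triad-level comparison described in the entry above can be illustrated with a short script. The sketch below is only an illustration of the technique, not code from any of the cited studies or from EgoWeb; it assumes the Python networkx package is available, and the node names and edge lists are invented. It computes the overall triad census of a hypothetical "behavioral" (observed) network and a hypothetical "cognitive" (recalled) network, and then the triad-by-triad agreement between them, the comparison that the paper finds to be near chance in real data.\\ \\ <code python>
# Illustrative sketch only: compare a hypothetical "behavioral" network
# (observed communication) with a hypothetical "cognitive" network (recalled
# communication) at the triad level, in the spirit of the Killworth & Bernard
# comparison cited above. Nodes and edges are invented for illustration.
from itertools import combinations

import networkx as nx

NODES = ["A", "B", "C", "D", "E"]

# Who actually talked to whom (behavioral) vs. who reported talking to whom
# (cognitive); directed ties, made up for this example.
behavioral = nx.DiGraph([("A", "B"), ("B", "A"), ("B", "C"), ("C", "D"), ("D", "E")])
cognitive = nx.DiGraph([("A", "B"), ("B", "C"), ("C", "B"), ("A", "D"), ("D", "E")])
behavioral.add_nodes_from(NODES)
cognitive.add_nodes_from(NODES)

# 1) Overall triad census: counts of the 16 MAN triad types in each network.
print("behavioral census:", nx.triadic_census(behavioral))
print("cognitive census: ", nx.triadic_census(cognitive))


def triad_label(graph, triple):
    """Return the MAN label (e.g. '003', '021D', '300') for one node triple.

    Runs the triad census on the 3-node induced subgraph; exactly one of the
    16 counts equals 1, and its key identifies the triad's type.
    """
    census = nx.triadic_census(graph.subgraph(triple))
    return next(label for label, count in census.items() if count == 1)


# 2) Triad-by-triad agreement: the fraction of node triples classified as the
# same triad type in both networks.
triples = list(combinations(NODES, 3))
matches = sum(triad_label(behavioral, t) == triad_label(cognitive, t) for t in triples)
print(f"triad-by-triad agreement: {matches}/{len(triples)}")
</code>
~~DISCUSSION~~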