Moving to…Nebraska!

I’m excited to finally share the news: I’ll be joining the Information Systems & Quantitative Analysis faculty in the College of Information Science & Technology at the University of Nebraska at Omaha this fall! UNO’s focus on community engagement is a great fit for my research and teaching, and many of the things that appeal to me about the college are basically the same things that drew me to Maryland: an interdisciplinary environment, strong support for research with real-world impact, and interesting people who are passionate about their work.

Of course, it also means I’ll be moving on from the University of Maryland’s iSchool, and I’ll genuinely miss the great faculty, staff, and students in College Park. The iSchool has been a wellspring of opportunity and a wonderful intellectual home for me over the last three years, and I’ve learned more than I could have imagined.

But.

One of the things I’ve learned about is making difficult decisions. The specifics are less important than the bottom line: my family comes before my job. It wasn’t an easy choice, but moving to Nebraska will let me continue doing the same kind of awesome work while maintaining a happy household.

And that’s what we expect to find in Nebraska: the Good Life.

Image: “Nebraska…the good life” (CC-BY 2008, Thomas Beck)

Citizen Science at AGU 2015 Fall Meeting

I went to my first meeting of the American Geophysical Union in December. It was quite the experience; I’ve never seen academic conferencing on such a scale before. I liked the primarily-posters format because it was much more interactive overall–I could linger and discuss where I wanted to and skip the stuff that was less interesting to me. And to my surprise, there was a lot more that interested me (mostly in the earth and space informatics section) than I had initially expected.

However, it was hard to find the citizen science content, aside from that which was labeled as “education” despite a primary focus on science over outreach. With such a massive program, it’s pretty important to be able to search effectively, and I missed a lot of good stuff just because I didn’t know how or where to find or look for it. I made it to just 2 oral presentation sessions that featured citizen science in the session title; most other citizen science presentations were unobtrusively tucked into sessions with titles that presumably focused more on the science than the process and participants.


Climate Literacy: Improving Climate Literacy through Informal Learning & Citizen Science 1
December 12, 2015

Realizing the Value of Citizen Science Data, Waleed Abdalati

Perspective matters: the diversity of the public is part of the benefit. He was NASA Chief Scientist at the time of the Curiosity landing.

A 4-part series of TV segments on citizen science, starting with CBC at the Everglades. Then gives the example of NPN & Nature’s Notebook. Points out that these are good data because people really care, as much as or more than professional scientists. CoCoRaHS is another example; video of NWS staff setting up an alert based on CoCoRaHS data, where the process between report and radio alert takes 2-3 minutes.

Another series – the crowd & the cloud. #1 Even big data starts small; #2 Viral vs. virus; #3 Feet in the field, eyes in the sky; #4 Citizens 4 Earth. Smartfin – surfer science.

Fantastic high quality video, really compelling teaser for the series. Will air in 2017 on PBS.

Q: this takes skill, how is training done?
A: CoCoRaHS has training protocol.


Citizen Science Contributions: Local-scale Resource Management and National-scale Data Products, Jake Weltzin

“from kilometers to continents”

Monitoring for decision-making at Valle de Oro NWR, the first urban wildlife refuge, in Albuquerque. Needed decision info and also public engagement. Data presented with a bar chart that shows when migratory species are present at the refuge, so they know when to manage for them. Also working on wetland restoration–reducing Siberian elm and increasing Rio Grande cottonwoods. Checking the flowering and fruiting–Siberian elm is leafing and flowering about a month ahead of cottonwood, and you need bare ground for cottonwood to propagate, so they need to remove the Siberian elm in the month before the cottonwoods in order to prompt cottonwood growth.

Product framework: phenology status data goes to phenometrics; climate and meteorological data goes to phenology models and integrated data sets; remote sensing data also goes into integrated data sets; phenometrics goes into phenology models; final products are gridded maps and datasets, and short-term forecasts.

Showing NCA annual start of spring based on lilac data. Very pretty maps of the “PRISM data set” for start of spring, a 4km-scale national map. The local version maintains granularity and scales down to NPS locations, so you can see the first leaf index for park locations. But NPS cares less about when things happen than about change from the historic record; since the data go back to 1900, they can show biological response to climate change at the level of national parks.


Putting citizen-collected observations to work — CoCoRaHS, Nolan Doesken

Starts with a funny 2-minute animated intro: “each measurement is like a pixel in a picture”. Talks about the 1997 flood in Fort Collins–60% of the library’s holdings were destroyed because they were in the basement due to work on the upper floors. Recent expansion into Canada and the Bahamas; now has over 20K volunteers.

Goals are quality precipitation data, and also education & outreach. Easy, low-cost equipment is important–the gauge is equivalent to the one NOAA uses for historic climate monitoring, so the data fit into a long history of measurements. Mobile app for data submission as well as web forms; data are permanently archived and provided as raw data and summary reports. Data are good for supplementing other sources like COOP.

Data tend to be accurate, spatially detailed (except in Nevada–not enough people), timely, etc. Who uses the data? Weather forecasters, hydrologists, water management, researchers, agriculture, climatologists, health, the insurance industry, tourism. Data are fed into the weekly US Drought Monitor process; drought conditions are easing. Snow data are hard to get, so their sources are valuable. The National Hurricane Center uses the data in post-storm summaries to describe impacts.

Challenges: an owl sitting on top of a rain gauge! Volunteers are much more male than female, very white, mostly college educated. Age demographics lean older, and those who stick with it tend to be from that demographic, even though signup rates for younger and more diverse demographics are good. Recruiting at national scale is tough. They have over 250 local volunteer leaders; need to recruit and train 3K new volunteers per year to balance attrition.

Cost effective but not free; after 18 years, still hanging on. Photos of a bear checking a rain gauge.

Q: GEO group looking for improving in situ precipitation measurements, especially in Africa. How to export to Africa?
A: It’s a matter of finding local leaders who care about local precip. Putting a local face on the project is more compelling than most other options. Subsidize the rain gauge cost; then communication is the next consideration–need infrastructure.


Crowdsourcing science to promote human health: new tools to promote sampling of mosquito populations by citizen scientists, Rebecca Boger

GLOBE program–international citizen science in the classroom, 20 years old. Discussing how materials are developed, new mosquito larvae protocol.

Train-the-trainer model with F2F workshops–big backlog and long waits to join the program, so moving to an LMS. Developing training slides for 50+ protocols, available in 2016, with emphasis on knowing how to conduct the protocol, not pedagogy. Participants have to pass quizzes before they can set up a login and get full access.

Developing a mosquito monitoring protocol: can do genus ID with a hand lens, species ID with a microscope and experts. Sampling from containers as well as ponds, streams, and puddles. Lots of research questions students can explore with the data. Have to get it up by the end of the year; will be doing a field campaign early next year to launch the new protocol.


Era of Citizen Science and Big Data: Intersection of Outreach, Crowd-Sourced Data, and Scientific Research 1
December 18, 2015

The Citizen CATE Experiment for the 2017 Total Solar Eclipse, Matthew Penn

Working with 3 government research labs, 3 corporate partners, 4 universities, 3 K-12 teachers, and participants. Donating telescopes to observers after the event; sponsors include the companies that make the software and filters.

The upcoming eclipse on August 21, 2017 will drive tourism and will be the most viewed eclipse in history. A total eclipse opens a window for viewing the inner corona in a way we can’t from space: the part easily viewed during an eclipse is the hardest part to study from a spacecraft. Planning to look at what is happening with polar plumes–they’re interesting, but they need more data than 3.5 minutes of observations from one location. Looking at the eclipse in Mongolia in 2009, they knew they would be able to see scientifically interesting events.

The path of totality goes from the PNW to South Carolina; the plan is to provide identical telescopes for volunteers to use at specific locations, transfer ownership after the event, and support ongoing use of the telescopes. While the eclipse will be viewable for only 2.5 minutes at any single location, the entire path of totality spans 90 minutes.

Funding needs about $180K for equipment alone: 60 sets of telescopes, filters, software, mounts, and drives; still need $ to cover cameras and laptops. Expecting about 26 GB of data per site, 1560 GB (~1.5 TB) in total. Sending data via 3-day priority mail is equivalent to about 6 MB/second, plus an upload of about 2 GB on the day of the eclipse itself.
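Those figures hang together, as a quick back-of-envelope check shows (the 60 sites and 26 GB/site come from the talk; the variable names are mine):

```python
# Back-of-envelope check of the Citizen CATE data volumes quoted in the talk.
sites = 60
gb_per_site = 26                   # expected imagery per site, in GB
total_gb = sites * gb_per_site     # 1560 GB, i.e. about 1.5 TB

# "Sneakernet" bandwidth: the full data set shipped by 3-day priority mail.
seconds = 3 * 24 * 3600            # three days, in seconds
mb_per_second = total_gb * 1000 / seconds

print(total_gb)                    # 1560
print(round(mb_per_second, 1))     # 6.0
```

So shipping hard drives really does work out to roughly 6 MB/s of effective throughput, as claimed.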

Afterward, they’re looking to develop additional projects for work on comets (can’t get major telescope time), solar programs on sunspots, and variable stars with prototype equipment in partnership with AAVSO.

Proof of concept: did one day of training with a volunteer who was going to the Faroe Islands in March 2015; conditions were lousy, but they got 15 seconds of inner corona data. The harder job was shipping equipment around the world and using crummy software. For a more prepared test, doing a train-the-trainer training with 5 locations in Indonesia for March 2016 to verify the process.

Interested? mpenn@nso.edu; mpenn@noao.edu; sites.google.com/site/citizencateexperiment

Q: how much does weather matter? A: weather isn’t great for about half the range, which tends to be 60% cloudy. But with 60 sites they should get good coverage, and they’re hoping for 100; if they add more sites, they’ll add them where it’s cloudy, to hopefully get more data from the sparser areas. Sites have some range of +/- 10 miles to move in, but expect some gaps.


Synergetic Use of Crowdsourcing for Environmental Science Research, Applications & Education, Udaysankar Nair

Motivated by needs for data that aren’t collected by agencies but suited to crowdsourcing with compute platforms like Google Earth Engine.

Using ODK for an “end to end design” of the system that pushes data to Fusion Tables and Google Earth Engine, merged with satellite imagery from NASA via a maps engine.

Land Use & Land Cover Change data currently relies on remote sensing data, but it needs ground truthing for contextual information. Many potential uses for data.

Claims 4m accuracy for GPS on the app. Can use ODK offline to collect data–the step-by-step form is overly simplified, so usability could be problematic.

Tested with a middle school classroom, introduced via the topic of biomes. Requires a lesson plan, including learning standards. Had kids use mobiles with ODK to track land cover in their neighborhood. Also did some work with student teachers in India, mapping small water bodies to support the Kerala State Biodiversity Board. Also looking at collecting data on open water containers for vector-borne disease research; frost occurrence; damage after severe weather. Doesn’t mention how this is fed back to students.


LastQuake: Comprehensive Strategy for Rapid Engagement of Global Earthquake Eyewitnesses, Massive Crowdsourcing, & Risk Reduction, Remy Bossu

Points to eBird: you can’t do this for earthquakes, because the target reporters are eyewitnesses. Focus on felt earthquakes, looking at social media (SM) activity and speed of feedback, so info needs to be available across SM platforms. QuakeBot, apps & add-ons are intended to automatically merge direct & indirect eyewitness contributions, seismic data, and other sources.

Can’t identify a “felt EQ” with instruments, but can via SM. In the US, you can just look at tweets containing “earthquake”, but not every place uses Twitter that much. They use real-time web traffic to their authoritative site to figure this out based on IP addresses; they could tell Kathmandu had not been flattened because visits continued after several minutes.

citizenseismology.eu, @LastQuake

During the Nepal event, they made an automatic map but did not predict intensity until about 19 minutes, confirmed damage at 20 minutes, and published 38 tweets in that time, during which there were the main shock and 5 felt aftershocks. Working to develop an app with UI improvements that gets better geolocated pics & videos, shares comments to SM, and sends push notifications. Got decent data despite the fact that the quakes they recorded were in areas where LastQuake isn’t well known. They validate pics for lack of IP infringement, respect for human dignity, and accuracy.

Quick rise in app downloads 10 minutes after Nepal. After 9 days, they had identified most of the post-main-event quakes. 85% of access from Nepal was via mobile, with 1/3 via the app & 70% of reports. Traffic picks up within a minute of shocks. Case on December 7: 110K downloads, 82K in operation (75% retention). Saw the app launched within 1 minute of the event and notifications: immediate response worldwide.
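The 75% retention figure follows directly from the two counts (a trivial check; the variable names are mine):

```python
# Retention after the December 7 event: installs still active vs. total downloads.
downloads = 110_000
in_operation = 82_000
retention_pct = round(100 * in_operation / downloads)

print(retention_pct)  # 75
```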

One KPI is number of responses within 30 minutes. Examples where they aren’t well known: Afghanistan, Arizona, England, Malaysia–hundreds of responses in each case, 2400 for AZ.

How were people finding them? This is the only app providing info on felt earthquakes, and it only takes hours for info to be shared. So they asked for feedback–what improvements? Users wanted help: what to do in earthquakes. So they’re developing visual pop-ups with do’s and don’ts (stay away from buildings, don’t call 911 unless injured) and adding an “I am safe” button. Seeing this as risk reduction information for the public: reduce inappropriate behaviors and fatal errors.

Q: Implications of using this for catastrophes, like anthropogenic disasters like shooters? How can you verify the truthing, veracity of content?

A: Rapid-onset events are easy to tell: eyewitnesses hit the website within 2 minutes, and others don’t yet know of the event to falsify reports, but floods are much harder. They don’t see a lot of people messing with the comments; the “not so bright” people are easy to spot. They can easily remove outliers, which are likely wrong not because people are lying but because they are so emotional. With pictures, it’s not about reliability–a photo of a small crack in the wall isn’t useful to them; they care more about larger damage.


CosmoQuest: Building community around citizen science collaboration, Pamela Gay

The data landscape for space science is changing dramatically–a “horrific data flood flying down upon our heads and across our internet connections”. They need help handling tons of data and can’t get enough postdocs, so they have to open the doors of the ivory tower. Open data and open access will help, but require a supporting community: curricula, projects specific to grade level, adult learning, planetarium & Science On a Sphere content to recruit and disseminate, crowdsourced podcasts, “guerrilla science” at science-related events.

Current projects focus on surface science. CitizenScienceBuilder for image annotation; TransientTracker for photometry and other products. Building data products and simulations. Portals like Moon Mappers, Asteroid Mappers, etc. Funded through 2020 with some pre-selected projects, but if all goes well, there will be an RFP for small grants up to $60K, with details on how to ensure that results get published. Providing educational materials, curricula, etc.

They partner with a lot of programs for podcasts, live YouTube events with up to 5K attendees, in-person events. “Come science with me”.


A method to harness global crowdsourced behavior to understand something about avalanches, Jordy Hendrikx

Snow avalanches cause 30 deaths/yr in the US and up to 500 fatalities worldwide, with $1B damage in the US alone. There’s also a dramatic uptick in backcountry users, and fatalities are increasing more slowly than usage, so education is likely helping. Historically there are 4 parts to risk: snowpack, weather, terrain, and, most importantly, people. Need to understand their decisions.

Research on the causes of avalanche fatalities tries to understand accidents based on the result, but fatalities are usually a cascade of errors, so it’s hard to identify a causal factor. Rather than working back from the consequences of a series of decisions, try to go to the “top of the cliff” to figure out which groups are more likely to be at risk in the future due to their behavior, and then use targeted education. The goal is prevention via behavioral understanding.

Crowdsourcing by taking real-time GPS tracks on a smartphone app, then doing Internet surveys about decision-making that they can connect to the tracks. Using a marketing approach to decisions. They describe & quantify travel practices in concert with group decision-making dynamics and participant demographics, using the GPS track as an expression of decisions and terrain use.

Sending people to a webpage sounds easy but is hard; you need to advertise and get the word out, which is harder than scientists think it will be. Then they show a simple flowchart–sign up, download the app, track trips, get an auto-reply afterward, fill in the survey. They have been doing this since the 2013/14 season, noticing self-selection bias in who participates, and are trying to grow the sample to a broader range to get more behavioral insight. Using snowball sampling via SM and word of mouth, but you have to reflect the culture of the crowd, not the stuffy white lab coat. Getting thousands of tracks from around the world.

Outreach is critical–presenting at workshops & public events, publications in popular press.

The Unquantified Self

I seem to be an early adopter, and by extension, an early un-adopter. I started using a high-end pedometer every single day about 10 years ago, long before the current activity tracker craze. I started off with fancy Omron USB pedometers and wore 2 Fitbits to shreds before losing the third one.

I responded incredibly well to tracking and monitoring. Too well, in fact.

For me, activity trackers prompted obsessive behavior, especially the Fitbit, since it permitted editing data for better accuracy. For a while, I also (manually) tracked several other health-related covariates, until I realized how much of an unnecessary, emotionally unhealthy, and ultimately useless data-generation burden I was putting on myself. It had become yet another stressor and told me nothing new. When I switched to the Withings Pulse, I couldn’t edit my data, so I had to stop taking it so seriously. That was a real relief.

I kept using the Pulse for a couple of years, but eventually it was solely because I got into birding and I’m extremely anal about data quality (it runs in the family, seriously).

But after 8+ years of wearing an activity tracker, the extent of my use case had shriveled to wearing it as a wristwatch and recording distances traveled while birding.

Soon that stopped being adequate reason for constant self-surveillance. The privacy issues were not the main reason I stopped wearing my activity tracker last year.

I came to the conclusion that quantifying myself was unkind to myself. I am a whole person, not a bag of numbers, and boiling my day down to a couple of statistics fooled me into thinking that they were somehow meaningful or important. Not to mention promoting even more self-centered attitudes that weren’t socially productive.

For me, the gamified interfaces were a further insult to my sense of agency. They didn’t empower me; they enslaved me. I told myself otherwise for years, but the reality is that I was willing to use that data as warrant to treat myself more harshly and judgmentally than I would treat any other human being. That’s a no-win situation.

Initially, it seemed useful, but after a few years, the data stopped telling me anything new and I stopped trying to use it for self-improvement. A few years later, I no longer even paid attention to the data because when I did, it made me feel bad. I just wore the device out of habit, and then out of my devotion to generating top-quality bird data.

I stopped wearing it overnight. There was zero value to the sleep data, and I rediscovered a strong preference for sleeping unencumbered, without a digital device on my wrist; I also kicked my phone out of bed. I found delicious contentment in settling into an electronics-free bed. All I was doing was starting to draw boundaries: no more technology in bed, because that’s not what a bed is for. I don’t even take my phone into the bedroom at all anymore, because that’s not what the bedroom is for. This is a basic principle of good sleep hygiene: reserve the bedroom for its limited intended uses. When I was a kid, no one would have imagined having a phone in bed.

Then I stopped putting the activity tracker back on in the morning. My life didn’t change at all, except I no longer had an ugly, uncomfortable lump of black silicone and plastic strapped to my wrist. I forgot I even had the silly thing lying around.

Months later, I don’t miss it. Not even a tiny bit.

Instead, I feel like I’ve regained a speck of privacy and humanity. The more that my life is distilled into numbers like H-indexes and citation counts, the more value I place on the freedom to be unquantified.

“Citizen Science in Context” at 4S2015

Attending the annual meeting of 4S (the Society for the Social Studies of Science) in Denver this week has been lovely. It’s a delight to reconnect with colleagues across diverse spaces and make new acquaintances, all the while talking about science.

In the last 2 days alone, I’ve discussed killer robots, citizenship in citizen science, scientific conference cultures, the ups and downs of academia, the Federal Toolkit, and how PCS algorithms are invisibly affecting scientific careers by pre-assigning submissions to the wrong reviewers based on vocabulary problems.

Below I’ve posted my minimally-edited session notes from November 13’s session on Citizen Science in Context. Enjoy?


From the citizens’ point of view: Small scale and locally anchored models of citizen science
Lorna Heaton, Florence Millerand, Patricia Dias da Silva

Background focusing on large-scale growth of citizen science, usual themes around potential for exploitation. Sees smaller, locally anchored models as productive of new opportunities for meaningful engagement.

Alerta Ambiental: reporting around land-based activities for legal action, and environmental monitoring.

ONEM: species observations in France, basic wiki-based observation form for species of interest. Participant benefit in awareness of local habitat.

Flora Quebeca: knowledge exchange on Quebecois flowering species. Initial concerns around rare species harvesting. Discussions on Facebook around photos of rare species. Lots of learning via moderation. They provide ID keys, quizzes, etc.

Engagement that is specific to localized projects, distinct from larger-scale (so-called…) “decontextualized” projects. Tech mediation but strong local situation around sense of place. Shapes how knowledge is produced. Online and offline interactions are interrelated. Local citizen science supports understanding world nearby, public engagement beyond the local, and tech mediation that complements colocated participation and interaction. Sees online as potentially valuable for inclusion, learning, empowerment.

Q about how it’s science, not just activity.

A: Some of the data were used by researchers.


Citizen Science and Science Citizenship: same words, different meanings?

Alan Irwin

Points to development of ECSA, Fondation Science Citoyennes, explosion of growth. Questions around semantics of the terminology.

Agenda of European Environment Agency is dramatically different from Zooniverse. Many different meanings, term with interesting ability to capture attention.

He was at CSA 2015. The contrast in meanings of the term among its 600 participants was interesting. Highlights Chris Filardi’s talk: “they picked me up and put me inside their questioning community”. Contrast with Amy Robinson’s talk on Eyewire, full of enormous enthusiasm about what they’re doing (NB: one of the coolest keynotes I’ve seen in years). Notes the variation in scale–an intense ethnographic experience on an island vs. 160K people in a gamified environment online. Both are connected to citizen science, but do they have something in common or not?

Yes, in that understandings and knowledge connect with epistemologies. Cites Haklay’s 2013 levels of participation in citizen science–not to critique it, but it attempts to categorize things that are dramatically different, and “categorizations are not innocent” in how they define the space. Extreme according to whom? It reflects a view from the ivory tower focusing on the human-knowledge interface, and overlooks the organizational aspects of creating the systems. Nothing about how resistance can be the substance of it, how it can be a provocation or challenge.

Form (style) is less important than goals: a sense of movement is more valid, moving toward scientific citizenship. What if Eyewirers started asking questions about how the platform and the people there create a type of academic capitalism? What if the relationships on the island were hoovered up, with people treated as standardized sensors? Change can go both ways, and can lead toward richer development.

Concept around scientific citizenship–focused on more controversial areas of science and tech development, raises questions about relationships between knowledge and democracy. Cognitive justice as a keyword. Potential for scientific citizenship via distributed expertise, opening up science to society, practiced engagement, scientific-institutional-citizenship learning?

Potential of citizen science for scientific citizenship: is there evidence of it? Relatively little. More low-level engagement right now. Is the potential there? Yes, but:
1. Citizen science needs to be seen as a challenge, disturbance, or provocation to science, not solely an extension.
2. Questions of control: it can’t always be science-led.
3. Citizen dimensions should be taken as seriously as the science. What’s the model of citizenship and purpose of engagement?
4. Concepts like “epistemic justice” should be brought into the discussion.
5. Institutional learning needs to be addressed in structural terms.
6. Citizen science must be taken in the wider context of sociotechnical relations.

Feels STS can bring important elements to the discussion, but right now STS is very marginal to it.

Q: usage of term “citizen” implying both responsibilities and rights.
A: more attention to scientific perspective than question of what do we mean by citizenship, what are possible implications of this, could it be a way to open up?


Negotiating the concept of data quality in citizen science

Todd Suomela

RQs: what is discourse around data (quality) in citizen science, how is that negotiated?

Background: a dissertation on the framing of citizen science in journalism, a DataONE internship, and a data quality panel at the CSA conference. The internship announcement came out of a working group and reflects a perspective that work is needed to justify the value of the work. From the panel at CSA 2015, a summary: many projects use multiple mechanisms to influence data quality; methodological iteration is common in developmental stages; methods sections in published papers capture only part of the mechanism decisions made by researchers, e.g., confusion matrices.

Theoretical interlude: social worlds and situational analyses. Publics and sciences: responding to consequences of actions and the dependency of science on communication between scientists and the public.

Positional mapping with a split between public and science, and an orthogonal relationship between social worlds: insiders to regulars to tourists to outsiders. Project scientists/staff, educators, external scientists, journalist-writers. Themes include data and visible feedback, and positioning the individual’s work in the bigger picture.

For some, citizen science is a new label for an old thing. Promoting deeper engagement with data is a key goal for many project staff. Visible and rapid feedback makes it easier for volunteers to see the value, and is important in design conversations. Quality is an obsession for insiders working on citizen science, but strangers to this social world, both scientific and public, remain skeptical.

Calls for future work on data quality perceptions among scientists outside of current citizen science communities, links to more work on science studies.

Stitch Fix: Efficient Fashion for the Professoriate

Over the last few months, I’ve had to really up my game in a number of categories, including personal appearance. PhD students and even postdocs pretty much all wear utilitarian, cheap clothing, and when I got a faculty job, I knew my well-worn and overly casual wardrobe wasn’t going to cut it anymore.

I forced myself to do some shopping, all the while cringing at how much time it took to find just one or two items. Let’s face it, the last thing a new junior faculty member has time for is clothes shopping. As the semester progresses–and the weather gets colder in spite of my lack of appropriate layers–this becomes even more true.

So like many of you, I’d heard of this thing called Stitch Fix. When I looked more closely at the details, I figured it was worth a gamble: if even one item worked out for me in a shipment, it would be an improvement over trying to find it myself. And when my first Fix arrived this week, I actually kept three items–a total win!

Here’s why I think Stitch Fix is a great solution for academics:

  1. Academics need to look professional (at least occasionally), but rarely have the interest, patience, fashion sense, or time to go shopping. They usually have enough disposable income to selectively acquire items priced above fast fashion rates. Their time is worth enough to them that it’s easy to make a strong economic argument for outsourcing clothing selection.
  2. There’s an adequately extensive style profile to ensure that you get appropriate items, but it won’t take all day to fill out. You can also send your stylist short notes for each Fix (I told mine that I need some items in school colors, for example).
  3. Internet-and-USPS powered. No trip to stores or malls. No crowds or pressure. Shipping prepaid in both directions. Super efficient!
  4. You try on the clothes at home, under normal lighting, at your leisure (within 3 days of receipt). This is wonderful. It’s a zero-pressure environment and you can make a much more confident purchase decision once you’ve tried pairing items with what’s in your closet already.
  5. They send things you wouldn’t have picked, but which you should try anyway. Since there are only 5 things to try on, you might as well try all of them–and you might even like them! I scored two of those in my first Fix.
  6. The higher per-item cost is completely and utterly worthwhile because of #3. I also immediately realized how much I was limiting myself by using price as a first-round filter for what I try on, so this provides a counterbalance.
  7. The style cards are awesome: they show each item you got in a couple of different configurations, to give you ideas on how to wear them. As a result, I pulled out my leather knee boots for the first time in years, and they looked great with my new blouse and skirt! (Note for any librarians in the house: the style cards accumulate into a catalog of your wardrobe!)
  8. There’s a feedback cycle to improve your selections over time and let your stylist know if you need something special for an upcoming event or want to try something new.
  9. Did I mention that it saves a ton of time?

I can think of no better testament than pointing out that they sent a pair of (skinny!) jeans that fit really well on my very first Fix. As any woman knows, the search for good jeans can be a lifelong quest, so having someone I’ve never met send me a pair that fits beautifully? Simply amazing!

If you’re adequately convinced to try Stitch Fix for yourself, please do me a solid in return and use my referral link: http://stitchfix.com/sign_up?referrer_id=4201271

Responding to Reviewers

“Revise and resubmit” is really the best outcome of academic peer review – acceptance for publication as submitted is so rare it may as well not exist, and most papers are genuinely improved through the peer review and revision processes. Generally speaking, an additional document detailing changes must accompany the revised submission, but the conventions for writing these “change logs” are a little opaque because they’re not typically part of the public discussion of the research.

San Antonio Botanical Gardens during CSCW 2013

There are a couple of great examples of change logs from accepted CSCW 2013 papers from Merrie Morris, and I’m offering my own example below as well. It’s no secret that my CSCW 2013 paper was tremendously improved by the revision process. I wrote the initial submission in the two weeks between submitting my final dissertation revisions and graduation. For a multitude of reasons, it wasn’t the ideal timing for such an endeavor, so I’m glad the reviewers saw a diamond in the rough.

My process for making revisions starts with not getting upset about criticism to which I willingly subjected myself – happily, a practice that becomes easier with time and exposure. (If needed, you can substitute “get upset/rant/cry in private, have a glass of wine, cool off, sleep on it, and then come back to it later,” which is a totally valid way to get started on paper revisions too.) Hokey as it sounds, I find it helpful to remind myself to be grateful for the feedback. And that I asked for it.

Then I print out the reviews, underline or highlight the items that need attention, and summarize them in a few words in the margin. Next, I annotate a copy of the paper to identify any passages that are specifically mentioned, and start to figure out where I need to make changes or could implement reviewers’ suggestions. I find these tasks much easier to do on paper; spreading all the pages out around me helps when restructuring and identifying problem points.

During or after that step, I create a new word processing document with a table and fill it in with terse interpretations of the comments, as you’ll see in the example below. In the process, I sort and group the various points of critique so that I’m only responding to each point once. This also ensures that I’m responding at the right level, e.g., “structural problems” rather than a more specific indicator of structural problems.

The actual columns of the table can vary a little, depending on the context – for example, a table accompanying a 30-page journal manuscript revision in which passages are referenced by line number would naturally include a column with the affected line numbers to make it easier for the reviewer to find and evaluate the updated text. In the example below, I made such substantial changes to the paper’s structure that there was no sense in getting specific about section number, paragraph, and sentence.

As a reviewer, I’m all for process efficiency; I strongly prefer concise documentation of revisions. At that stage, my job is to evaluate whether my concerns have been addressed, and the documentation of changes should make that easier for me, rather than making me wade through unnecessary detail. Likewise, as an author, I consider it a problem with my writing if I need to include a lengthy explanation of why I’ve revised the text, as opposed to the text explaining itself. That heuristic holds under most circumstances, unless the change defies expectations in some fashion, or runs counter to a reviewer’s comment — which is fine when warranted, and the response to reviewers is the right place to make that argument.

Therefore, the response to reviewers is primarily about guiding the reviewer to the changes you’ve made in response to their feedback, as well as highlighting any other substantive changes and any points of polite disagreement. The persuasive style of CHI rebuttals – the closest parallel practice with which many CSCW authors have experience – seems inappropriate here because the authors are no longer persuading me that they can make appropriate revisions; they are demonstrating that they have done so. Ergo, I expect (their/my) revisions to stand up to scrutiny without additional argumentation.

Finally, once all my changes are made and my table is filled in, I provide a summary of the changes, which includes any other substantive changes that were not specifically requested by the reviewers, and note my appreciation for the AC/AE and reviewers’ efforts. A jaded soul might see that as an attempt at flattering the judges, but it’s not. I think that when the sentiment is genuine, expressing gratitude is good practice. In my note below, I really meant it when I said I was impressed by the reviewers’ depth of knowledge. No one but true experts could have given such incisive feedback and their insights really did make the paper much better.

——————————

Dear AC & Reviewers,

Thank you for your detailed reviews on this submission. The thoroughness and depth of understanding that is evident in these reviews is truly impressive.

To briefly summarize the revisions:

  • The paper was almost completely rewritten and the title changed accordingly.
  • The focus and research question for the paper are now clearly articulated in the motivations section.
  • The research question makes the thematic points raised by reviewers the central focus.
  • The analytical framework is discussed in more depth in the methods section, replacing less useful analysis process details, and is followed up at the close of the discussion section.
  • The case comparison goes into greater depth, starting with discussion of case selection.
  • The case descriptions and comparison have been completely restructured.
  • The discussion now includes an implications section that clarifies the findings and applicability to practice.

Detailed responses to the primary points raised in the reviews follow; I hope these changes meet with your approval. Regardless of the final decision, the work has unquestionably benefited from your attention and suggestions, for which I am deeply appreciative.

| Reviewer | Issue | Revisions |
| --- | --- | --- |
| AC | No clear research question/s | A research question is stated toward the end of page 2. |
| AC, R1, R3 | Findings are “obvious” | The focus of the work is reframed as addressing obvious assumptions that only apply to a limited subset of citizen science projects, and the findings – while potentially still somewhat obvious – provide a more useful perspective. |
| AC, R2 | Conclusions not strong/useful | A section addressing implications was added to the discussion. |
| AC | Improve comparisons between cases | Substantial additional comparison was developed around a more focused set of topics suggested by the reviewers. |
| AC | Structural problems | The entire paper was restructured. |
| R1 | Weak title | The title was revised to more accurately describe the work. |
| R1 | Does not make case for CSCW interest | Several potential points of interest for CSCW are articulated at the end of page 1. |
| R1 | Needs stronger analytic frame & extended analysis | The analytic framework is described in further detail in the methods section, and followed up in the discussion. In addition, a section on case selection criteria sets up the relevance of these cases for the research question within this framework. |
| R1 | Quotes do not add value | Most of this content was removed; new quotes are included to support new content. |
| R1, R3 | Answer the “so what?” question & clarify contributions to CSCW | The value of the work and implications are more clearly articulated. While these implications could easily be seen as common sense, in practice there is little evidence that they are given adequate consideration. |
| R1 | Include case study names in abstract | Rewritten abstract includes project names. |
| R1 | Describe personally rewarding outputs in eBird | These are described very briefly in passing, but with the revised focus are less important to the analysis. |
| R2 | Compare organizational & institutional differences | Including these highly relevant contrasts was a major point of revision. A new case selection criteria section helps demonstrate the importance of these factors, with a table clarifying these contrasts. The effects of organizational and institutional influences are discussed throughout the paper. |
| R2 | Highlight how lessons learned can apply to practice | The implications section translates findings into recommendations for strategically addressing key issues. Although these are not a bulleted list of prescriptive strategies, the reminder they provide is currently overlooked in practice. |
| R2 | Comparison to FLOSS is weak | This discussion was eliminated. |
| R2 | Typos & grammatical errors | These errors were corrected; hopefully new ones were not introduced in the revision process (apologies if so!) |
| R3 | Motivation section does not cite related work | Although the rewritten motivation section includes relatively few citations, they are more clearly relevant. For some topics, there is relatively little research (in this domain) to cite. |
| R3 | Motivation section does not discuss debated issues | The paper now focuses primarily on issues of participation and data quality. |
| R3 | Consistency in case description structure | The case descriptions are split into multiple topics, within which each case is discussed. The structure of case descriptions and order of presentation is consistent throughout. |
| R3 | Include key conclusions about each case with descriptions | The final sentence of the initial descriptions for each case summarizes important characteristics. I believe the restructuring and refocusing of these revisions should address this concern. |
| R3 | Does not tie back to theoretical framework used for analysis | The Implications section specifically relates the findings back to the analytical framework, now discussed in greater detail in the methods section. |
| R3 | No discussion of data quality issues | This is now one of the primary topics of the paper and is discussed extensively. In addition, I humbly disagree that expert review is unusual in citizen science (although the way it was conducted in Mountain Watch is undoubtedly unique). Expert data review has been shown to be one of the most common data validation techniques in citizen science. |
| R3 | No discussion of recruitment issues | Recruitment is now one of the primary topics of the paper and is discussed extensively. |
| R3 | Introduce sites before methods | The case selection criteria section precedes the methods and includes overview descriptions of the cases. They are also given a very brief mention in the motivation section. More detailed description as relevant to the research focus follows the methods section. |
| R3 | Do not assume familiarity with example projects | References to projects other than the cases are greatly reduced and include a brief description of each project’s focus. |
| R3 | Tie discussion to data and highlight new findings | While relatively few quotes are included in the rewritten discussion section, the analysis hopefully demonstrates the depth of its empirical foundation. The findings are clarified in the Implications section. |
| R3 | Conclusions inconsistent with other research, not tied to case studies, or both | To the best of my knowledge, the refocused analysis and resultant findings are no longer inconsistent with any prior work. |