Passive crowdsourcing- finding the data that people don’t know they’re creating

I have a new paper out! You can read it: Predicting plant attractiveness to pollinators with passive crowdsourcing.1


Just a human engaging in a normal human leisure activity.

A while back, my colleague, Doug Landis, was searching the web for pictures of flowers for a project about native plants, and noticed that the flower pictures he was looking at frequently captured insect visits. He got to wondering- do the bees we occasionally observe in this sort of photo have…meaning?

He asked me what I thought of his observation- and whether we could test it. Are flowers that are photographed more frequently with insects, indeed, more attractive to insects? This idea got me pretty excited. If you’ve been following me for long, you know I delight in finding data and patterns in places we don’t normally think to look. It’s kind of my *thing.* And through the course of everyday activities, humans passively collect data about the world around them. It stands to reason that common leisure activities- like photographing and sharing pictures of flowers- could potentially capture ecological phenomena- in this case, the visitation rates of pollinators to flowers of different species.

So we developed a method to test our hypothesis. Using a technique we termed ‘passive crowdsourcing,’ we searched Google Images for pictures of blooms of 43 common flowering plants that are native to Michigan, and identified insects that were visible visiting the flowers in the photos. We then compared these observations to visitation rates observed in controlled experimental trials using these same plants. We found that we could predict how often a flower was visited by wild bees from the number of visits we observed in the internet images, although relationships were less clear for honey bees and bee-mimicking flies. Patterns were strongest for flowers that bloom in late summer, when more bees tend to be around in our area.
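
For the curious, the core of the analysis boils down to something like this minimal R sketch- the numbers here are invented for illustration (our real data and code are linked in the footnotes):

# Hypothetical example: do insect visits counted in web image searches
# predict visitation rates measured in the field? (all numbers invented)
image_visits <- c(0, 2, 5, 1, 8, 3)              # insects visible per set of image results
field_visits <- c(0.1, 1.5, 4.2, 0.8, 6.9, 2.7)  # mean visits observed per field trial

visit_model <- lm(field_visits ~ image_visits)   # simple linear model
summary(visit_model)                             # look at the slope and R-squared
plot(image_visits, field_visits)
abline(visit_model)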

We’re pretty excited to see how passive crowdsourcing can be applied in the future. This method could be used by scientists to make predictions about other ecological phenomena that may be documented by human use of the web. Essentially anything that people tend to photo-document with any frequency could be capturing data, and could potentially help unlock the scientific mysteries of the future!2

1. And it’s all completely open access, because that’s how I roll. Our raw data and code are available here.
2. If anyone asks you to put down your phone and stop taking pictures of everything, you can gently explain how you’re advancing science. 🙂


Why the heck aren’t research papers free?


*yelling* Is this even a question? Of course they…

Should all research papers be free?

Um. Yeah. Obviously. Scientific knowledge doesn’t do much good if it’s all locked up, only accessible to the rich and/or privileged.

When I put it that way, I never get much argument.1

In fact, the majority of scientists I know tend to agree. They want their work to be readable by as many people as possible. If someone is as interested as me in how overwintering soybean aphid eggs acquire heat from sunlight due to their coloration and placement on the plant, good lord, I don’t want anything to stand in their way, because this person will, very likely, be my new best friend. But things do stand in the way (yup, my article above is paywalled- ETA: hey, cool! Looks like the embargo period is over! Read all about my cool model, everyone!!), and it doesn’t just make scientists lonelier people.2 It keeps the science out of the hands of the farmer who may be looking to my research to figure out whether she has to worry about soybean aphids next season. It keeps science out of the hands of the policy maker who is writing regulations and guidelines about how to deal with invasive species. It keeps it out of the hands of the scientists in developing countries who are trying to crack a similar problem in their landscapes with resources much more limited than what we’re privileged to have here in North America. It wastes time. It wastes brain power.

Not having open access to scientific research hurts us. It hurts people. Many have been screaming it from the rooftops. The problem is that many, many papers are ‘protected’ under copyright, and these copyrights are enforced by for-profit publishers. The law is, unfortunately, on their side (at least in the US), which means they can, and do, take vindictive action against those seen to be in violation of the law. The long-term solution is that these laws need to change- they need to change so they protect the interests of science, scientists, and humanity. Not large corporate publishing houses.

Why haven’t scientists all gotten together to take a stand? Well, they have. We have. Scientists are increasingly choosing to publish their work in open access venues.3 This is good. This is important. But there is still a heck of a lot of human knowledge going into closed venues. The reason? Costs. Costs, measured in money, time, and professional prestige. This is how they get us; this is how they persist.

The big publishing houses are nothing if not clever at developing their business model. They have created a market where they are the arbiters of the ‘quality’ of science, and it is self-reinforcing. Scientists are busy people. The higher rank you achieve, the busier you are. Publishers capitalize on that by creating exclusive journals- essentially filtering4 the scientific product for the busy scientist. The contributions that make it past these filters are valued more in this paradigm; thus the scientists authoring those contributions are valued more, and are more likely to be promoted. You can see how this value system propagates itself- and thus, there’s a direct incentive to buy into the system.

I mostly fight this aspect of the system by yelling at it. It works, sometimes.

The other disincentive, though, is something I struggle with more, because no amount of yelling helps.

In a lot of cases, publishing in open access journals (or paying for the open access option in a ‘regular’ journal) is prohibitively expensive at the individual lab level. PLoS One, for example, cost $1450 last time I looked. In a discussion on Facebook this weekend, a friend cited a $640 bill from PeerJ.5 You can often publish closed access for much less, or even free in journals under the purview of the big publishers. This can be difficult to justify when you don’t have a large research budget and you need to pay an extra semester of GRA stipend for a student whose experiment took longer than expected.6

The situation is even more complicated for small and medium society journals. For example, in the journal I published in most often early in my career, Environmental Entomology, there is no publication fee for members of the society choosing subscriber-only access, but open access fees for the same paper start at $2,000 USD. I love the ESA and what they do, and I know that they use the revenues they make from both subscriptions and open access fees to support society activities- our annual meeting, scholarships for students, funds to help support parents in science- things I’ve personally benefited from. This model has always worked for the society, and I know they’re hesitant to change it.

There are a few outlets and workarounds. For example, Royal Society Open Science currently doesn’t have publication charges (and they cover the cost of a data submission to Dryad!). Another friend at another large American university told me that her library has a program to help researchers offset the costs of open access publication (you just need to apply for funds VERY EARLY in the fiscal year, because the money is snapped up quickly due to high demand). We can advocate for this approach at our own libraries- eventually, the budget allocated to subscription fees could be allocated to open access charges instead. I feel like this is the most likely long-term solution. But the patchwork of current availability means that, unfortunately, a change to an entirely open access model is not immediately feasible for many labs. Combine that with the professional disincentives, and it’s clear we still have a lot of work to do on the road to open access to scientific information.

*sigh*

This is the kind of thing that keeps me up at night, you know. But we will get there.


1. This may be because I’m scary when I start on about the morality of open access. But…You’d tell me if I was scary. Right? RIGHT?!
2. Please, email me for a reprint, my secret friend.
3. This content better not be paywalled. It would reach a critical level of irony. The world might implode.
4. Filtering for…well that depends. Some might argue they filter for the most sensationalized, oversold, and likely irreproducible science. I’d never make that claim without data though.
5. This is even more painful when you consider exchange rates. Canadian researchers are plagued with a weak dollar right now, for example.
6. You want this student to, y’know, be able to eat and live and stuff.


A book for all: Data Management for Researchers by Briney

If you’re a data management enthusiast like me (yes, we exist, and there’s actually a bunch of us), you’ve probably heard about Kristin Briney’s book, “Data Management for Researchers.” I received a copy for review a few months ago, and have been taking my time to savor it.1 But if you’ve heard of this book, chances are that although you’ll certainly find aspects of it useful, you’re probably the metaphorical choir that we, the data managers, are preaching to. You might even argue that there are lots of data management resources out there- why a book? But Briney does something unique here, and I have been enthusiastic to recommend it to everyone around me.2

This book offers a fantastic overview of all things data management, and- here’s the really important part- explains it all within the value system that currently dominates academic culture. Often, open science advocates, myself included, approach persuading people to become better data managers with idealistic, esoteric reasons.3 This can make our arguments sound a little tone-deaf, because we’re proposing a radical shift in practice without tying it to the realities and constraints researchers face (i.e. no time, evaluation based on impact factors and grantsmanship)- for many of the practices we advocate, it just looks like opportunity cost. The thing is, open science, data management, and reproducible practice aren’t just that- and this book shows us why. Data management makes all researchers better scientists.

This is a book that an open science advocate can hand to an academic administrator or a new graduate student, and they can flip through it and think “Hmm, these practices will help make my life easier and help me meet my goals and succeed by the metrics used to evaluate performance in the paradigm in which I exist!”4 Briney lays it all out. She starts by dedicating the book to the memory of data lost, and then, chapter by chapter, outlines the important concepts in data management. Each chapter starts with an anecdote (often a cautionary tale) about some aspect of the typical research lifecycle that is affected by data management. She advocates a baby-steps approach:

Remember that good data management need not be difficult or complex, but instead is often the summation of many small practices over a range of data-related topics. The best solutions are the ones that become a routine part of your research workflow.

This book covers data management from before collection- data management planning- to sharing and reusing other people’s data, with specific reference to the best practices, constraints, and concerns a data creator or user may face. Each topic is covered comprehensively and grounded by real-world examples peppered throughout, making the material relatable. Not all sections will be directly relevant to all researchers (for example, I’ve never had to anonymize data, because the IRB doesn’t care if insects get doxxed), but even those were enlightening to read (I learned that I can save myself a lot of trouble by never working with human data 🙂 ), and much of the book is relevant to everyone (see: metadata). This book is really for all researchers- those that love data (treat your data right!) and even those that hate data and all things quantitative (handle your data efficiently so you have to spend less time on it overall!).
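
To make that “see: metadata” point concrete, here’s the flavor of small practice Briney advocates- a hypothetical sketch in R (file names, columns, and numbers are all invented) that saves a data file alongside a plain-text record of what each column actually means:

# Hypothetical example: save data plus a human-readable metadata record
aphid_counts <- data.frame(
  sample_date = as.Date(c("2015-06-01", "2015-06-08")),
  plot_id = c("A1", "A2"),
  aphids_per_plant = c(12, 47)
)
write.csv(aphid_counts, "aphid_counts_2015.csv", row.names = FALSE)

# The metadata file tells future-you (and everyone else) what the columns mean
metadata <- c(
  "aphid_counts_2015.csv (hypothetical example data)",
  "sample_date: sampling date, ISO 8601 (YYYY-MM-DD)",
  "plot_id: identifier of the experimental plot",
  "aphids_per_plant: mean soybean aphids counted per plant (10 plants per plot)"
)
writeLines(metadata, "README_aphid_counts_2015.txt")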

I, personally, will use this book in a variety of ways- primarily to supplement my own cache of cautionary tales and anecdotes about data management, and as a way to connect what I do for open science5 to what I do in the lab/field/office. But I also intend to give a copy of this book to each graduate student / trainee that joins my lab, should/when this whole faculty job search pans out and I get my own lab.6  Putting this all out there, getting students on board with the ideas and techniques to treat data with the respect it deserves, will help us all succeed, by whatever metric, whatever paradigm.

1. Read: being mildly negligent about reviewing it while traipsing around the globe like some sort of bon vivant.
2. Read: shoving it in front of people at bus stops, leaving it on the lunch table. I’m insufferable.
3. Save the world! Advance the field!
4. This is what the voice in other people’s heads sounds like, right?
5. I do this sort of thing. Professionally. Not all the time. But some of the time.


That’s Kaitlin Thaney on the left. Photo by Joey K. Lee. Photoshopping by Richard Smith-Unna

6. #operationhiremeplease2016


A fundamental difference of opinion

If you had told me as a child that one day I would be an entomologist writing a scathing retort to a New England Journal of Medicine editorial, I probably wouldn’t have believed you. First, I’d have been confused, because I didn’t know what an entomologist was at the time, and second, because I was instilled with a strong sense of deference to authority as a child. Authority wants what’s best for us, collectively, right? Authority makes reasoned, evidence-based decisions that help society be functional and productive, and since I cannot be an expert in all things, I should defer to the authority in a given area.1 It’s how living in a community, in a society, works best.

I think this is why I get so mad, so personally affronted, when I observe people in positions of authority who aren’t acting in ways that support the greater good, and instead are taking painfully obvious actions to maintain their own authority over a group.

In case you haven’t read it, I’m talking about this editorial.

If there ever was an authority I’d uncritically defer to, it is the New England Journal of Medicine.


Well, this is awkward. From Brembs et al: Deep impact: unintended consequences of journal rank

…er.

Seriously, though. When it comes to contentious issues in medicine, NEJM is certainly regarded as an authority. But their recent editorial on data sharing, well, baby, you’re in my house now.2

There are many, many reasons that some data shouldn’t be shared, and most open science advocates are quick to acknowledge these issues- the editorial touches on some of these points. The big, obvious ones I see are confidentiality concerns and situations where releasing the data would otherwise present a hazard to the subjects under study.3 However, there is also a really important dynamic that’s often unacknowledged- the interplay between open science, privilege, and power. Terry McGlynn explores this issue in more depth in his excellent blog post, but it can be summarized like this: the people in the most precarious positions- the students, the postdocs, the people working at small institutions who don’t have the resources to support many irons in the fire- are the ones that face the most risk when sharing data. The established scientists with large budgets at large research institutions, and the infamy and clout to defend their research ‘territory’ (if you will), face disproportionately little risk in sharing data. Yet most outreach activities in the open science community target early-career scientists,4 and the most vocal cries against data sharing I’ve seen have come from the most established of the establishment. I take no objection to these arguments, and am actively working within the system to try to mitigate these risks and issues.5

So, this all being said, there are two main points that I take6  exception to in the NEJM editorial.

A second concern held by some is that a new class of research person will emerge — people who had nothing to do with the design and execution of the study but use another group’s data for their own ends, possibly stealing from the research productivity planned by the data gatherers, or even use the data to try to disprove what the original investigators had posited.

Read that again.

even use the data to try to disprove what the original investigators had posited.

Wow. So you mean to say that the data is only valid when used to support the collectors’ hypothesis?  Do we need to do a little bit of a review of the scientific method here?

This statement irks me for several reasons- first of all, it assigns some sort of social value to the hypothesis. Everyone likes to be ‘right,’ but hypotheses never are- they are either supported or not supported by the data (within the frequentist paradigm, at least). However, data supporting one hypothesis doesn’t mean that hypothesis is true- it just means it was the best hypothesis tested in the study.7 If another person comes along and uses the body of available data to formulate a new, better-supported hypothesis, this is not something to get sore over- this is a sign the scientific process is working. I know, scientists are people with egos, but if you really believed that your paper, your hypothesis was the final answer, shouldn’t science, I dunno, stop?

But it doesn’t stop. I think if you want to be the final answer in science, then you don’t really want to be a scientist. You just want to win.

The second bit that gets to me is more personal:

There is concern among some front-line researchers that the system will be taken over by what some researchers have characterized as “research parasites.”

This issue of the Journal offers a product of data sharing that is exactly the opposite. The new investigators arrived on the scene with their own ideas and worked symbiotically, rather than parasitically, with the investigators holding the data, moving the field forward in a way that neither group could have done on its own.

So, Hi! I’m Christie and I am a research parasite. I’m a pretty productive scientist for my career stage- partly because of what I often jokingly refer to as niche partitioning: I function as the data analyst on most of the collaborative projects I’m on. I see this as a mutually beneficial, heck, symbiotic relationship- and I believe most of my collaborators and data creators don’t feel my role is parasitic, exploitative, or derivative. Yet NEJM seems to think this sort of positive relationship is some sort of exception. It also belittles the science I do- for example, one of my recent papers used three separate data sets, produced by others, for applications other than the data creators intended- and not all of the data creators ended up being authors on the final manuscript (although some did, based on their contribution to the scientific ideas, analysis, and writing in the final paper). This paper is an original contribution to the literature that builds on the work of others, bringing together what we know across several domains to create new knowledge. This is what I grew up believing science was about.

What, precisely, does the typical “research parasite” look like to the editors of NEJM, I wonder? Evil monsters, lurking in the shadows, taking fuzzy pictures of your poster presentation so they can copy your graph? That grad student who can smugly stats faster than you, so she can ANOVA the crap out of your RCBD and get into Nature? Certainly not human beings with lives and families, who are interested in how the world works, and want to use our existing body of knowledge to ask more meaningful questions. Nah, that would never happen.

This NEJM editorial is not just about data sharing. It is about the scientific establishment using its power to foster the culture of fear and competitiveness that keeps them in power. And I, for one, am not buying what they’re selling.

1. this approach still tends to work reasonably well in the sphere of hair grooming. I usually just say to my stylist “You’re the expert, just do something low maintenance and flattering” and, as long as I don’t go to SuperBudgetCuts, the outcome is usually better than if I’d attempted to micromanage the situation.

2. If you take data sharing advice from me, you are not officially obligated to also take medical advice from me. Not that kind of doctor. No. Stop. I don’t want to see your rash.

3. Bahlai et al 2012, “A comprehensive listing of exact locations of endangered species often poached for the alternative medicine industry” in the Journal of Hypothetical Examples is, indeed, my most under-appreciated paper.

4. #OSRRcourse. Guilty as charged, officer.

5. This is a rant for another day, but I see the problem as there being a near total lack of incentives for open science practice within the traditional ways that scientists measure success. The establishment maintains control of these metrics, meaning that scientists who succeed by traditional metrics are the ones that gain power.  Basically, a positive feedback loop of establishment power, corporate interests in the form of scientific publication, and people rewarding only the people who think most like them. This is wrong, and we need to rise up against it.

6. expletive deleted

7. “All models are wrong, some are useful”- G.E.P. Box, I believe. Models- mathematical formulations of hypotheses- are abstractions that can approach truth but never really hit the truth asymptote, because nature isn’t neat and clean like that. And when you run a frequentist test of a hypothesis, you’re typically rejecting a null rather than directly testing your hypothesis. So basically, you’re saying “Well, it’s not NOT my hypothesis, so my hypothesis is supported.”
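
For the non-statisticians, here’s a minimal, hypothetical R sketch of what “rejecting a null” looks like in practice (the counts are invented):

# Hypothetical example: visits by bees to two plant species
visits_species_a <- c(3, 5, 4, 6, 2, 5)
visits_species_b <- c(8, 9, 7, 10, 11, 9)

# The t-test evaluates the null hypothesis that the two means are equal.
# A small p-value lets us reject that null- it does not "prove" our hypothesis.
t.test(visits_species_a, visits_species_b)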


Ready, set, TEACH.

The semester starts in one week’s time. Before the break, I was furiously working, working, working to get the Open Science and Reproducible Research course at least skeletonized. This week will be all about sorting out the tasks I need to get done before my trial by fire- er, first time teaching an entirely new course- begins. My plan is to make this course VERY discussion-based and open form (hahaha, of course it’s open form)- essentially, hit the students with reading materials which we will discuss together in class, and activities we will do together, and have some time in each class period devoted to supported work on our real open data set. The class is small and the students are pretty much all known to me, so I think conversation should come fairly easily.

The folks at the Mozilla Science Lab have had us fellows work on a number of exercises to help us focus on our respective projects. Since my stream-of-consciousness tends to come out best here on this blog, I’ve decided to hybridize these assignments with blog posts. So to start- here’s some reflection on what I’m doing and why.

Challenge

Science does not end up in the hands of the people who need it. Within academic science, data and analytical techniques are not shared freely, making it difficult or impossible for other scientists to reproduce or build on that work. People working outside western academic science have even more trouble, because they typically do not have access to academic publications, the end points of much academic research. This means that the people who need science most- the ones making decisions that affect human health, livelihoods, and the environment- do not have access to the information produced by scientists to help solve these problems. This problem could largely be solved by academic science moving to an ‘open’ model, where scientists use the tools and connectivity available to them through the internet to document and share all steps of the scientific process. However, academic science lacks the infrastructure to train scientists to use these tools, and lacks a regulatory or reward structure that makes changing established approaches appear worthwhile.

Scope and scale

Closed practice is pervasive in academic science. At every level of rank and organization, the infrastructure is built to place little value on open practice, and sometimes to outright deter it. The culture of academic science reinforces secrecy- I remember, even as an undergrad working as an assistant in a research lab, hearing conversations between grad students about their concerns that their work would be ‘scooped’ by others. There was an oral tradition where students passed down this message- that science is primarily an adversarial pursuit, and you have to hold your cards close, lest your competitors use your data to solve their problems before you. These messages get reinforced as a student passes through the pipeline and up the academic ranks. Frankly, high-impact-factor papers (typically in closed access journals) and grant funding are the currency of success in academia- there are few rewards for inclusivity or reproducibility. Because of these incredibly dominant cultural forces, I believe the key to changing the culture is gentle shifts in regulation and the reward structure- and then aiming for the bulk of the change to occur in early-career scientists.

Refined problem statement

Closed practice hurts science while benefiting only a small subset of individual scientists. Open science can increase diversity and participation in science, while fostering the process of science itself by improving reproducibility, but requires training, advocacy, and a reward system.

Reflect

Most scientists agree that learning to use technology to improve the reproducibility of their work is a good thing, but there is a lot of pushback against open science in my field for two big reasons.

  1. The learning curve associated with taking a whole new approach to science is not trivial. With each step up the academic ladder, individual scientists have less time to spare, and their approaches to problems are more and more ingrained.
  2. There are risks to open practice, both perceived and real, and rewards can be difficult to quantify under conventional academic metrics. The cost: benefit ratio varies with field, career stage, institution, and many other factors.

The first factor, I feel, is fairly easily addressed. Academics are used to doing things that are hard. Offering training in open science early in their careers makes learning it less hard, and then they can follow the path as they grow as scientists. I’m less able to address the second point because these are real structural problems that are harder to overcome. I feel like we need to change the value system- how people are evaluated- in academia to tip the ratio on these cost-benefit analyses.

Brainstorm

I feel like other scientific fields are further along on the path to open science in some ways. In more quantitative sciences like physics, for example, the learning curve towards using more technological approaches to science is less steep, because a lot of that field is already computational. Fields like physics are also known for massive collaborative projects (think the Large Hadron Collider), meaning the expectation is already there that people working on one aspect of a problem will share their work so that collaborators can build on it. However, answering a lot of questions in organismal ecology is still possible with the work of a lone scientist, carefully making observations on her own, so open collaboration isn’t necessarily pushed as a part of a training program. I think one of the key factors in bringing open science to organismal ecology involves breaking down the hesitance towards technology I’ve observed among many people in my field. To do this, I think the best approach is to start small- show them simple, small steps they can take that make their lives easier or more efficient- be it better documenting their data, scripting an analysis so that it automatically processes observations from a new experiment, or making their contributions more easily integrated with a collaborator’s work.
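
As a hypothetical example of that last point, here’s the kind of tiny R script I mean- a reusable function that summarizes whatever new observation file you hand it, so data from a new experiment don’t mean redoing the analysis by hand (file and column names are invented):

# Hypothetical sketch: one function that summarizes any new observation file
summarize_counts <- function(csv_path) {
  obs <- read.csv(csv_path)
  # mean and standard deviation of counts for each treatment
  aggregate(count ~ treatment, data = obs,
            FUN = function(x) c(mean = mean(x), sd = sd(x)))
}

# Processing a new experiment is then a single line:
# summarize_counts("experiment_2016_season2.csv")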

Transformation

Open science has the potential to change both society and academia, for the better. It will place scientific evidence into the hands of the people who need it most, from people working on more efficient agricultural systems in developing countries to people who want to learn better, evidence-based ways to treat medical conditions. It will create an environment where scientists build on each other’s work, and can draw on the skills and ideas of the broader community.


The Open Science and Reproducible Research course, Diversity in science, and other happenings

We’re moving forward on the OSRR course, so YAY.  I’ve got a syllabus and schedule drafted- you can comment on it here. If you’re listed as a guest speaker, SURPRISE! 🙂 Note that this is a  tentative class schedule and we’ll probably move things around a fair bit. I’ll be in contact.

November was a whirlwind month.  Not long after returning from Fellows onboarding in New York in October, I was whisked away to MozFest in London, then had a few days of down time before travelling to the Entomological Society of America Meeting in Minneapolis.  I’ve been spending most of my non-travel time working on getting my balls back up in the air, so to speak.  Job applications1, students2, paper reviews, designing a course, administrative stuff, dying aphids3.

Some brief updates from the land of Dr. B:

  • Going to the two meetings back-to-back, I was REALLY struck by the differences in demographics I observed between the open science community at MozFest and the community at the meeting in my home academic field. To put it frankly, the diversity of the people interested in, and able to contribute meaningfully to, science whom I met in London does not match the diversity of the people employed as professional scientists.4 This is a problem we should ALL be concerned with solving.
  • I am working on a book review, because I’m the type of person that’s asked to do book reviews now. Isn’t that COOL?!5
  • I’m very thrilled to hear the word ‘open’ coming more and more from the government of my home country since the election. I will have more to say about that soon. But in the meanwhile, keep up the good work, Justin! 6

Exciting things are afoot! Stay tuned!


1. #operationhiremeplease2016 #postdoclife
2. Or, as I like to call them, “future case studies in reproducible research,” which the students really like, in practice.
3. I think it’s a humidity thing. But the ladybugs have got to eat, so this is A Problem.
4. To put it into jargon my fellow ecologists can understand, H_Moz >>> H_Ent, on trait axes of gender, race, age, and ethnicity.
5. My inner teenaged nerd self feels clever and powerful.
6. I am not a one issue person. Now, hold still while I talk at you about how open science relates to me buying a share in my local CSA….


Course description for The Open Science and Reproducible Research course (for bug counters + people that count other things too)

Comments are welcome- here’s a draft course description for the course I plan to teach.

Are you a grad student interested in developing reproducible research skills? You should sign up for:
Open science and reproducible research
I’m developing an open science course for graduate students, based on using legacy and existing datasets. At the beginning of the course, groups of 3-5 students will be given (or choose) a legacy dataset, and together, we will work through the process of data cleaning, documentation, sharing, data manipulation, analysis, interpretation, write-up, and eventual submission to a journal, all using open science and reproducible research practices. The course will be delivered as a 2-credit-hour Special Topics in Entomology course (ENT 890) and is open to ENT, EEBB, and LTER grad students. Enrollment is strictly capped at 12, so sign up today! Meeting times TBD, based on student schedules. For more information, contact cbahlai@msu.edu

Reasons to like this:
• Students get experience with academic publishing early in their programs, including authoring a paper!
• Helps deal with the backlog of under-utilized data generated by large-scale projects
• Training which allows labs to get into compliance with new guidelines from federal agencies
• Guest speakers include leaders from the open science community
• Gets a new generation of scientists starting out on an open science foot.

Topics to be covered:

• Open Science
• Data hygiene
• Data cleaning (for unhygienic data- see the sketch after this list)
• Metadata
• Ways to share data
• Analysis workflows
• Intro to scripted analysis
• Intro to R
• Integrating R and version control software
• Collaborative coding
• Open notebooking
• Collaborative writing
• Submitting a manuscript to an open-access journal
• Data creation and authorship
• CC licensing
• Understanding other people’s data
