Forums › WEPR › What is Science?

aknerd
offline
aknerd
1,416 posts
Peasant

Hello!

Most of the threads I have made regarding scientific topics have failed because we all come from different backgrounds, making it hard to find a common topic we can actually debate about. But, looking over the types of people that post in the WEPR, I still feel like there is a general interest in science. So, following Moegreche's lead, this is going to be more of a philosophical discussion.

Feel free to join in even if you have no formal scientific education; everyone's opinion can contribute something worthy here. This is intended to be more of an opinion-based thread, and I'm interested in what other people think about science. Feel free to make a new thread, however, if you want to talk about how science is different from religion. While it might come up here and there in this topic, religion is really beyond the scope of this thread.

Okay, now that you've read the fine print, let's get down to business. The first questions I would like to address are these:

1) Where do you think science comes from?

2) What is a typical scientist? How do you think one becomes a scientist?


I would like to focus on current science for these questions, by the way. "Current" being the last 20 years.

  • 83 Replies
aknerd
offline
aknerd
1,416 posts
Peasant

you could lie, but then any replication of the experiments using the described methodology will yield different results.


Not necessarily. It is very possible to be correct by accident, or (more likely) you know your data were bad, so you fudged them to make them "good". Like, maybe Bob's results actually showed that some chemical had no effect on some organism, even though the chemical is a known carcinogen. So Bob fudges the data and BAM, his subjects got cancer.

Some more capable scientist replicates Bob's study, and actually gets the results Bob fabricated for publishing. So... what then? In my opinion, Bob is still a bad scientist and his original paper (and all future and previous research) should be called into question. But how do you find him out?

Then, there is the much larger problem of trying to determine whether or not differences in results from two studies are because one of the studies messed up, or just because of random environmental noise. In certain fields (eg virtually all of ecology) this is a pretty big deal.

Like, if you give two chefs the same recipe, they will produce two different dishes, right? And one is going to be better than the other. But even though they used the same recipe, that doesn't mean that that chef is better, because they used different ingredients, different kitchens, etc. Furthermore, the dish that appeared to taste better might have been sneezed in. Multiple times. Point is, there is a high level of uncertainty that comes into play when you are trying to assess the validity of a scientific paper.
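To see how far noise alone can separate two honest replications, here is a toy simulation (purely illustrative; the effect size, sample size, and noise level are all made up):

```python
import random
import statistics

def run_study(true_effect=0.5, n=30, noise_sd=1.0, seed=None):
    """Simulate one study: n measurements of the same true effect plus random noise."""
    rng = random.Random(seed)
    data = [true_effect + rng.gauss(0, noise_sd) for _ in range(n)]
    return statistics.mean(data)

# Two labs follow the identical protocol ("recipe") on the same true effect.
lab_a = run_study(seed=1)
lab_b = run_study(seed=2)
print(f"Lab A estimate: {lab_a:.2f}")
print(f"Lab B estimate: {lab_b:.2f}")
# The estimates differ even though nobody did anything "wrong" --
# sampling noise alone separates them.
```

The point of the sketch is just that differing results are the expected outcome of honest replication, so a mismatch on its own cannot distinguish noise from misconduct.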

methodology has to be explained

Yes. But the methods that were carried out over a long period of time, potentially decades, get condensed into just a few pages in the journal. They certainly don't account for every little thing that happened over the course of the study. Lab notebooks are certainly an improvement, but oh god the handwriting.

It's a model that has worked fine so far.

I agree. I'm just trying to establish the importance of trust within the scientific community. It's important to know that if someone is wrong, it is not because they are trying to be deceitful, but because the theory itself is wrong. There's some article I showed Mage a while ago that's actually a pretty good example of this in action...
MageGrayWolf
offline
MageGrayWolf
9,462 posts
Farmer

Science is the exploration of how the world works, and developing, proving and disproving theories about this


Can't really say a theory is proven. It has to remain tentative to be flexible to new data input.

At which point it essentially becomes Bob's word vs Barbarella's word. Both scientists could have released contradictory papers, but both papers could be completely logical and claim to use accepted methodology. Which either means one scientist could have lied about their methods (but which one?), or that one (or both) of the methods was actually not that great to begin with, or the data were bad, or that BOTH scientists are wrong, or that both scientists are correct and the contradiction isn't really a contradiction, or...


When we have such a situation we often come to the conclusion that there are further missing pieces to the puzzle we haven't touched on. This situation is usually faced at a hypothetical level rather than an experimental one. It would be the point where we would honestly be saying "we don't know".

Think about it- if you are a scientist and want to make a name for yourself, or secure grant money, or whatever, are you going to try to redo someone else's exact experiment? Or would you rather try something new?


From my understanding the big hitters are the ones trying to poke holes in other people's stuff, finding those holes and patching them, thus resulting in the something new.

Not necessarily. It is very possible to be correct on accident, or (more likely) you know your data were bad, so you fudged it to make it "good". Like, maybe Bob's results actually showed that some chemical had no effect on some organism, even though the chemical is a known carcinogenic. So Bob fudges the data and BAM his subjects got cancer.

Some more capable scientist replicates Bob's study, and actually get the results Bob fabricated for publishing. So... what then? In my opinion, Bob is still a bad scientist and his original paper (and all future and previous research) should be called into question. But how do you find him out?


So in such a case he came up with a model that works by accident. I fail to see how trust is needed when we can replicate the result and get a correct answer, even if the method originally used was flawed, the outcome was not. That outcome is really the part that matters in the end. Of course it's far less likely that Bob could get away with being right every time.
Think of it like a member of a sports team cheating. The final outcome of the game could have been the same even if the team member hadn't cheated and gotten away with it; it would just have come to those results by the actions of another member. Though it's unlikely that the team member could consistently cheat in every game and get away with it, which can have grave consequences for that team member.

Point is, there is a high level of uncertainty that comes into play when you are trying to access the validity of a scientific paper.


Isn't it also the point of science to try and eliminate as many variables as possible? Doesn't one require controls to be effective?
HahiHa
offline
HahiHa
8,255 posts
Regent

Yes. But the methods that were carried out over a long period of time, potentially decades, get condensed into just a few pages in the journal. They certainly don't account for every little thing that happened over the course of the study. Lab notebooks are certainly an improvement, but oh god the handwriting.

The methods have to be given such that they are reproducible. You don't need to explicitly give every detail, because some procedures are either standardised (some lab procedures or statistical tests) or have been recently developed; in both cases you just say "we used this method as described in Sample et al. ", for example. Often it's not just the methods, but also the terminology; in my field it is important to note which previous work I'm basing my terms on.

Of course you're right in saying it doesn't prevent slight data manipulations; what is often done is that only the data that give nice results are presented, which is technically not wrong, but it just doesn't show the whole picture. But you as a reader of the paper have to be aware of that and of many other things like that; not saying it is the reader's responsibility, but it would be stupid to ignore it. Newspapers often misinterpret or misrepresent some things when writing an article (even more so if the article is based on previous press releases and not on the original paper...).
aknerd
offline
aknerd
1,416 posts
Peasant

The methods have to be given such that they are reproducible.


"have to"? I don't know about that. I suppose that it depends on how you define reproducible. I would say that the methods should be given such that they are theoretically reproducible. Because, again, in many cases reproduction simply isn't feasible.

"we used this method as described in Sample et al. ", for example

Which is actually a whole other problem in science ethics. Because (and note that while this practice is common, it is discouraged by most journals) many papers refer to another paper in their methods, but then that paper actually refers to still another paper in their methods, and so on. So you get this game of telephone going back to the original paper, whose methods may not be what you are intending to cite. It's not hard to find examples of this even in well-respected journals.

But like I said, that's a whole other issue.

I fail to see how trust is needed when we can replicate the result and get a correct answer, even if the method originally used was flawed, the outcome was not.

Okay, then we clearly disagree on that point of ethics. What does everyone else think about the matter? This is a pretty interesting topic. I would think that it is clearly unethical if someone fabricates data, no matter the outcome. But does it actually harm the scientific process if the outcome is what would have happened anyway?

I say yes. I understand your sports analogy, Mage, but I think I can apply essentially the same argument to a different topic and get different results.

Suppose there is a crime scene, a grisly murder, and the actual murderer is in custody. But the police are having a hard time finding evidence. So a detective fabricates evidence and the murderer is convicted and gets a life sentence. Okay. That's some serious business. On the one hand, it's great that the murderer got caught. From that perspective this story has a happy ending.

But it's not about the murderer; it's about the detective. Do you want to live in a society where detectives can fabricate evidence and get you thrown in jail for the rest of your life? Furthermore, what if it eventually comes out that the evidence was fake? Then the murderer would be released, and it would be much harder to re-convict him due to double jeopardy and all that.

Science is serious business. Scientist Bob didn't actually KNOW what the results of his study should be, but that didn't stop him from making up data. It turns out he was probably right, but that doesn't matter. If it is ever found out that he made up the data, then his results will obviously be called into question and the study will have to be repeated again, wasting further resources. All of the studies that cited Bob's study could also be at risk.

People like Bob have to be taken seriously because, on a purely economic level, they are a huge waste of resources. And the progress of science is very much resource limited. By punishing/ostracizing people like Bob, you cut down on this waste, and science as a whole progresses more rapidly. If we are able to trust that other scientists act ethically, we are able to automatically cross out a large source of error and save a lot of time. Or at least that is how I honestly see things. Thoughts?
HahiHa
offline
HahiHa
8,255 posts
Regent

"have to"? I don't know about that. I suppose that it depends on how you define reproducible. I would say that the methods should be given such that they are theoretically reproducible. Because, again, in many cases reproduction simply isn't feasible.

Well, yes, theoretically in the sense that someone reading the paper can understand what you did. There are cases where it may be a bit foggy and just precise enough to be published, and yet other cases where the info is there but the writing is not clear; but theoretically, the methods have to be precise enough for replication (assuming one had the required material and data). A problem here is that it may depend on which journal is willing to publish it.

Which is actually a whole other problem in science ethics. Because (and note that while this practice is common, it is discouraged by most journals) many papers refer to another paper in their methods, but then that paper actually refers to still another paper in their methods, and so on. So you get this game of telephone going back to the original paper, whose methods may not be what you are intending to cite. Its not hard to find examples of this even in well respected journals.

I don't see any problem with this as long as you search for the original paper. Of course you have to follow citations back to the original work; that's clear. If you do this, it's OK.
Besides, on one hand, there are certain reference papers that coined a certain terminology, for example, that are well known among those working in that field. On the other hand, I'm aware you cannot just cite a paper instead of writing a methods section; you still need to give the specifics of your own work (like what solutions you used and stuff like that), and that doesn't stop you from also saying you did it as in that other paper.
Moegreche
offline
Moegreche
3,826 posts
Duke

I was thinking about this notion of reproducibility. This seems prima facie like a fairly reasonable requirement for proper scientific pursuit. But there are results, as has been mentioned, that aren't reproducible. So we make the move to something like theoretically reproducible. But what does this mean?
When people say something like theoretically p they usually mean that p isn't practically possible, but is theoretically possible. I'm still not sure what that means, though.

So suppose I'm a palaeontologist and I find dinosaur bones at location L. After recovering the bones, I determine that this particular dinosaur's bones being at L is significant. Perhaps this species wasn't postulated to have lived in that climate, or perhaps the location - which also relates to the time period in which the species existed - suggests that our beliefs about this species are false.
The claim here is that species S was found in location L. There are implications (as I just mentioned) that result from this discovery. But it seems to me that even on a theoretical level this result isn't reproducible.

Or suppose there is some enigmatic particle of which there is only one sample. We run tests on this particle to determine its properties, but these tests will destroy the particle. Certainly someone else could have done the experiment and gotten the same results. But the experiment itself isn't reproducible. The particle is destroyed.

So here's my question. It seems like we want something like reproducibility when we develop and run scientific experiments, but there are acceptable exceptions to this rule. So what is the rule? The responses running around in my head are either far too weak or far too strong.

HahiHa
offline
HahiHa
8,255 posts
Regent

Perhaps this species wasn't postulated to have lived in the climate, or perhaps the location - which also relates to the time period in which the species existed - suggests that our beliefs about this species are false.

*Or incomplete. Makes a difference. Finding fossils adds information. Assuming the species did not live there/then was an a priori assumption (if even a positive assumption at all), due to lack of information.

The claim here is that species S was found in location L. There are implications (as I just mentioned) that result from this discovery. But it seems to me that even on a theoretical level this result isn't reproducible.

Not reproducible as in finding the same fossil again, but reproducible as in finding further skeletons of the same species in the same layer, which is practically possible. On top of that, the fossils will usually still be embedded in a block of sediment whose structure can be analysed and compared.

But I get what you're saying. In this case, checking on every fossil would be too much of a strain; you just accept that the locality written on the note is the true finding locality. You also assume that there must have been a team working on it, all of whom can vouch for it. Lastly, a fossil without information on locality and age is worthless; you assume everyone has an interest in that aspect, as the locality can potentially even yield an important discovery (or you cheat and create a false sensation, but the more unusual the find is, the more it will be doubted and looked at in detail).

.. sorry, this is kinda my field of study, so...
Moegreche
offline
Moegreche
3,826 posts
Duke

Those are all really strong points. And to be clear, I'm not at all trying to debase or debunk scientific practice. I have a deep respect for scientific practice, but we're trying to get at the nature of it all.

Perhaps the destroyed particle is a better example, although in both cases there appears to be a tremendous amount of trust that the studies we read are accurate, well informed, and up to the standard practices of the field. But again this raises the question of what exactly the standard practices of the field are. Reproducibility seems to be a good starting point, but this notion is clearly defeasible.

Maybe I'm just getting too philosophical with things, but I think these are important questions to which scientific practice must provide an answer. Or (better yet!) it's up to us philosophers to help cultivate such an answer. Nonetheless the challenge needs to be met. Why is it that we care about fossils with dating? Presumably it's because that information fits into a wider spectrum of understanding. But this just brings in an earlier point I was trying to make: do scientific practices reflect our overall epistemic goals? Or do I have this backwards - maybe the development of science (broadly construed) has presented a constraint on the epistemological goals we have.

HahiHa
offline
HahiHa
8,255 posts
Regent

You're right, in the end there's always a significant level of trust needed, especially in novel fields/experiments. I guess the standards vary strongly from field to field, and they are not fixed; some practices might be well established while others are quite experimental, and all are subject to change. Essentially, we trust that with enough control, the results must be somewhat right, and that despite all its bickering, the scientific community will unveil scams eventually.

Thinking about it, apart from reproducibility, the time factor is also important, as research based on false information is a frail construct, and as soon as it gets too diffuse or headed in the wrong direction when compared to further information, it falls apart (causing potentially a lot of collateral damage). So not only the reproducibility of the actual experiment, but of all that follows from its conclusions, is important.

Mickeyryn
offline
Mickeyryn
276 posts
Shepherd

Yay! It is another one of these you-must-think-deep-down-because-this-is-a-major-philosophical-discussion. Joy. Don't mean to hate, but I am just putting it out there that I have read enough things on deep-down-philosophy to last a lifetime.

... :] ?

aknerd
offline
aknerd
1,416 posts
Peasant

So here's my question. It seems like we want something like reproducibility when we develop and run scientific experiments, but there are acceptable exceptions to this rule. So what is the rule? The responses running around in my head are either far too weak or far too strong.


So. So far, in your post, we have two types of legitimate studies: reproducible studies and theoretically reproducible studies. But, as you pointed out, the latter of the two is a somewhat nebulous concept. And, as I'm about to point out, so is the first.

In order for a thing to be objectively reproducible, it must be perfectly reproducible. But think about it. So you have a study where you do thing A, get results B, and make conclusions C. Someone wants to verify C, so they will try to replicate A in order to see if they get B or not. But A can never happen again, because part of A includes the time and place that A happened. A can at best be approximated.

And so really, the first category is not reproducible, but rather sufficiently reproducible, which is a very subjective term. Reproducibility is subject to, among other things:

- The general consensus of the scientific community
- The strength and nature of the claims C that were made from data B
- The context of A and B
- How well A is explained

I think the input of the community is often downplayed, but it is a very important component. It comes into play largely via the scientific publishing system, which is based on peer review and significant editorial control. Basically, it all comes down to what is and is not "okay" in terms of publishability. Often, things are "okay" until someone demonstrates why they are not, which doesn't exactly follow the ideals of the scientific method.

An example, in case I've been rambling too much:

Say someone does some study looking at the effect of diet on the common rat, Rattus rattus. At the time, it's considered okay to try to reproduce a study as long as you use the same species. So, one would NOT be reproducing the experiment if they used mice instead of rats.

But then, someone comes along and discovers that rats from different parts of the country react differently to diets. Which means that in order to replicate the experiment, you would need to use rats from the region that the original researcher got rats from.

But. It also depends on the claims that the researcher made- did he say that his results applied to all rats, or just the rats he used in the study?

And here's the kicker: if at the time the researcher did not think that regionality was important, he might not have even listed where his specimens came from. Which, given the new evidence, significantly changes the reproducibility of the experiment.

TL;DR:
Reproducibility is not an innate characteristic of an experiment, but rather subject to a variety of forces.
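The rat example can be sketched as a toy simulation. Everything here is invented for illustration (the regions, the effect size of 2.0); the point is only that an unreported covariate can flip a replication's outcome:

```python
import random

def diet_effect(region, rng):
    """Toy model: the diet helps rats from the north, does nothing in the south."""
    base = rng.gauss(0, 1)  # individual variation
    return base + (2.0 if region == "north" else 0.0)

rng = random.Random(0)
original = [diet_effect("north", rng) for _ in range(50)]
replication = [diet_effect("south", rng) for _ in range(50)]  # region never reported!

print(sum(original) / 50)     # clear positive effect
print(sum(replication) / 50)  # roughly zero
```

The replication "fails" even though both labs followed the written methods to the letter; the methods section simply never recorded the variable that mattered.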
Moegreche
offline
Moegreche
3,826 posts
Duke

What a lovely argument, aknerd. This looks like a home run to me. Of course, this raises the point that scientific study might need to bring in non-empirical considerations such as the simplicity, elegance, and applicability of a particular methodology.
This is a well-recognised feature of the outcome of scientific practice. But it looks like you've moved the point ever farther back in the process of scientific investigation. This is not to say that radical skepticism is the proper response to these considerations. Perhaps this is a question we can justifiably 'punt' on.

MageGrayWolf
offline
MageGrayWolf
9,462 posts
Farmer

In order for a thing to be objectively reproducible, it must be perfectly reproducible. But think about it. So you have a study where you do thing A, get results B, and make conclusions C. Someone wants to verify C, so they will try to replicate A in order to see if they get B or not. But A can never happen again, because part of A includes the time and place that A happened. A can at best be approximated.


If I'm not too mistaken, isn't the point to try to reproduce the results to the best of our ability? I will agree that this may touch on one of the limits we have to science, but it seems to be almost delving into an absolutism.
Minotaur55
offline
Minotaur55
1,373 posts
Blacksmith

1) Where do you think science comes from?


Science has no beginning nor end. It just is. It is bound to everything and everyone. It exists whether one knows it is there or denies its existence. It can be no more, no less.

2) What is a typical scientist? How do you think one becomes a scientist?


These seem to be two completely different questions. I'll answer these separately.

What is a typical scientist?

A typical scientist refers to the norm or stereotype a human being portrays in the art of science. And because science has so many fields, there really is no cultural expectation one can logically have of a scientist. It's a broad term.

However, most scientists do have behaviors that one might call typical, or a cliché. One of these is being very smart, obviously. Another is being somewhat judgmental, which can be rooted in many different forms of ideology.

Being nerdy, however, is an artificial cliché of scientists worldwide. Being smart does not mean you will have issues in social activities. The same goes for intellectualism. It just means that the things one person deems important can be unimportant to other human beings.

How do you think one becomes a scientist?

In general terms: You have an interest in science. It can't get any simpler than this. It can also root from a deep interest in and desire for solving mysteries. Interest, intrigue, and curiosity are generally what drive most scientists. Finding new organisms, forms of energy, and the like is generally enough to make a man or woman become a scientist.

In the sense of it being a profession, it stems from a strong understanding of how the universe works. It can also stem from a strong urge to teach others. For it to be your profession, you need to have the type of brain where you can analyze information for long periods of time and want to solve an issue with excitement and determination. In the case of teaching, it can be as simple as wanting to teach. The reasons really vary per person. There is no real answer one can give as to why someone wants to do something. The answer can change at any moment and at any time.
aknerd
offline
aknerd
1,416 posts
Peasant

If I'm not too mistaken isn't the point to try and reproduce the results to the best of our ability? I will agree that this may touch on one of the limits we have to science, but it seems to almost be delving into an absolutism.


I think that this is a good point, and it's really what I was trying to get at. What I was trying to say was that "objective" reproducibility (or what could be called absolute reproducibility) is just unrealistic. Or rather, impossible. So, it makes sense to reject that notion of reproducibility altogether. But then, a new definition of reproduction is needed.

So. I think a potential working definition could be somewhere along the lines of: "A scientific study is reproducible when one could conceivably conduct a separate study that, if the initial study was valid and the second study was executed properly, would be likely to produce similar results."

So, let's break this down. First, the study has to be "conceivably conductable such that...". Another way of saying this is that the second study is sufficiently similar to the first. But it doesn't have to be the same. As I said before, this notion is heavily influenced by the consensus of the scientific community. So, it could be that there is good reason to believe that there is an alternate way to conduct the study that should lead to similar results.

Next, we have this If statement. This is important, because it tells us what conclusions we can make from the results of the second study. If the results match the initial study, it means that either
1) both studies produced trustworthy results
or
2) neither study produced trustworthy results

The more likely option is that (1) is true, which means this result would support the conclusions of the initial study.

If the results are NOT reproduced, it means that at LEAST one set of results is not trustworthy (it could be both sets again). In this case, the authors can argue which results are invalid.
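One common way to make that "match or no match" judgment concrete is a two-sample test. The sketch below (a hand-rolled Welch's t statistic using only the standard library; the simulated data are invented for illustration) asks whether two studies' means differ by more than their combined sampling error would explain:

```python
import math
import random
import statistics

def welch_t(sample1, sample2):
    """Welch's t statistic: how many standard errors apart are the two study means?"""
    m1, m2 = statistics.mean(sample1), statistics.mean(sample2)
    v1, v2 = statistics.variance(sample1), statistics.variance(sample2)
    se = math.sqrt(v1 / len(sample1) + v2 / len(sample2))
    return (m1 - m2) / se

rng = random.Random(42)
original = [1.0 + rng.gauss(0, 1) for _ in range(40)]     # original study's data
replication = [1.0 + rng.gauss(0, 1) for _ in range(40)]  # honest replication, same true effect

t = welch_t(original, replication)
print(f"t = {t:.2f}")
# A small |t| (roughly below 2) suggests the studies are consistent --
# option (1) or (2) above -- while a large |t| points to a failed replication.
```

Note that even a clean statistical comparison only tells you the results disagree; it cannot, by itself, say which of the possibilities listed above is the culprit.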

This definition is still missing something, in that it does not state when it should be applied. Really, in practice reproducibility is something that only comes into play with scientific experiments, not all studies in general. More "descriptive" studies don't really need to be reproduced, because the data should more or less speak for itself.

In the case of Moegreche's palaeontologist, this was really more of a scientific discovery. There was no experimental element. That would only be the case if, say, the researchers went back in time and placed dead dinos in different locations to see where their fossils would end up in present time.

You can search for more dinos, but you aren't trying to reproduce the initial study. If you find another dino at location L, you didn't replicate the initial study, you expanded it. There's kind of a subtle difference there, but I'm not sure how to explain it.