Remote Viewers Correctly Predict the Outcome of the 2012 Presidential Election

“An expedition into the unexplored territory of remote viewing & rating human subjects as targets, within a binary protocol”

By Debra Lynne Katz (www.debrakatz.com)

This peer-reviewed paper was accepted as a full paper at the Parapsychological Association’s 56th Annual Convention in Viterbo, Italy, where Debra presented it.

It was published in the Spring/Summer 2013 issue of Aperture Magazine, the publication of the International Remote Viewing Association; a link to the article as it appeared in Aperture is available on my website.

(Full citation: Katz, Debra & Bulgatz, Michelle. Aperture Magazine, Spring/Summer 2013, pp. 46-56.)


Lead Researchers/Project Managers: Debra Lynne Katz/Michelle Bulgatz
Analyst – Statistician – Analytic Tool Developer: Alexis Poquiz
Report Edited by: Jon Noble
Remote Viewers:
Michelle Beltran, Jon Noble, Deborah Sherif, Laura Shelton, Paul Hennessy, Patsy Posey, Dolphin, David Beatty, Dan Hofficaker, Jason Brown, Russ Evans
Abstract – Researchers designed a project to determine whether 11 remote viewers, utilizing a double-blind protocol, could describe a human subject in enough detail that raters could choose between two potential candidates in order to predict the outcome of the 2012 United States Presidential Election.

Remote viewers utilize intuitive yet structured protocols to obtain information that lies outside their analytic mind or current knowledge base.

Unlike in other intuitive disciplines, where human subjects are a primary focus, humans are the least utilized targets in remote viewing.

Researchers set out to answer: (1) How strongly did the viewers’ candidate preference affect their sessions? (2) How does a project involving a human target differ from those utilizing objects and locations? (3) Is the use of human targets in remote-viewing-related research projects, or in applied precognition projects involving binary outcomes, something that researchers or project managers may want to consider? (4) Which session rating method/system is the most helpful with human subject targets? (5) Why are human subjects typically not utilized as targets in formal RV research studies when they are quite often the main focus for intuitive practitioners?

Methodology: 11 remote viewers were tasked only with “The target is a person”. Sessions were turned in one week prior to the election. Each word and sketch from each session was entered into a spreadsheet and compared to both candidates, first with the Targ Scale and then with the more sensitive Dung Beetle System. After the election, viewers were informed that they had been tasked with viewing the elected candidate, President Obama. Viewers were later surveyed for their candidate preference. Once the scoring had been completed, the results were sent to Alexis Poquiz, who calculated the percentages that matched (Correct), did not match (Wrong), and were Unknown for both candidates.

Findings – Out of 11 sessions, 8 matched Obama, and 3 matched Romney. The ‘Lower Q%’ score also yielded an overall group prediction for Obama. The viewers’ preference for a particular candidate was compared to their judged prediction. 7 out of 11 viewers indicated a preference towards a particular candidate. All 7 voiced a preference for the candidate that their session pointed to, including one whose session pointed towards the wrong candidate.

Conclusion – (1) Human targets are more challenging to rate than location/object-based targets due to inherent similarities between humans; viewers’ subjective, relational descriptors; and raters’ personal biases, perpetuated by competing media outlets, along with an inability to perceive a subject’s inner life in the way a remote viewer can. (2) Human targets in remote-viewing-related research projects or applied precognition projects involving binary outcomes should not be considered unless only one option target in the pairing includes a human. (3) Poquiz’s Dung Beetle System proved itself to be a superior rating tool. (4) Viewer preference may be as problematic as telepathic overlay in remote viewing research and projects; utilizing a blind protocol does not and cannot control against this.

Introduction

In early October 2012, Michelle Bulgatz and Debra Lynne Katz designed a project to determine whether or not remote viewers could accurately predict the outcome of the upcoming presidential election, which was to take place on November 6th, 2012.

With the primary elections completed, it was clear that the two candidates in the final election would be the incumbent, President Barack Obama, and the Republican nominee, Mitt Romney (barring any unforeseen circumstances). Polls indicated it would be a very close race.

This informal experiment set out with the following hypothesis and research questions:

Hypothesis

Remote viewers from a variety of backgrounds, even with little experience viewing human targets, will be able to predict the outcome of the next presidential election when utilizing a double-blind protocol.

Research Questions

  1. How strongly will the viewer’s candidate preference affect their session?
  2. How will a project involving a human target differ from those utilizing objects and locations?
  3. Is the use of human targets in remote-viewing-related research projects or applied precognition projects involving binary outcomes something that researchers or project managers may want to consider in the future?
  4. Which method/system of rating/judging sessions is the most helpful when evaluating sessions with human subjects as targets?

Background and Participant Selection

Remote viewers utilize intuitive yet structured protocols to obtain information that lies outside their analytic mind or current knowledge base. Information comes to them in the form of images, words, sounds, smells, physical sensations and emotions.

Several of the viewers invited to participate in this project are experienced and trained in a variety of methods, such as Controlled Remote Viewing (CRV) or Extended Remote Viewing (ERV). These methods were originally developed for and utilized by researchers/remote viewers serving in various secret U.S. military/government programs, who later went on to teach them publicly once the programs were declassified. Several of the viewers are current or former members of the International Remote Viewing Association and were themselves trained by former military remote viewers. Some of the remote viewers were trained in Clairvoyant Reading methods that can be found in Debra’s books, “You Are Psychic: The Art of Clairvoyant Reading & Healing” and “Extraordinary Psychic: Proven Techniques to Master Your Natural Abilities”. A couple of the viewers were new to both methods, having done only one or two sessions prior to this study.

Why Choose a Human Target?

Unlike in other intuitive disciplines, where human subjects are the primary focus, humans are the least utilized targets (“target” simply meaning that which the viewer is assigned to view) in remote viewing (RV) practice, research, and applied precognition projects.

In fact, the only recent psi-related research that could be found was that conducted by Dr. Julie Beischel, who over a ten-year period studied the accuracy of mediums by having the “sitters” they worked with compare information intended for them to information intended for a test subject, through application of a 5-point rating scale.

Although some of the viewers participating in this project have done hundreds of sessions, most of those trained in the CRV or ERV methods have little experience with viewing human targets directly.

This isn’t to say these viewers have no experience describing humans. On the contrary, when one is tasked with viewing a location or an activity at a location, humans are often present, and the viewer will successfully describe them. However, most of the time, at least with practice targets in which a viewer is given feedback upon completion of his/her session in the form of a photo, text, or even video, the main tasking (the objective assigned by the project manager or teacher) is to describe a location, an object, or an activity the human is engaged in, as opposed to the more personal aspects of that human. In most remote viewing practice sessions, since the surrounding environment is the focus, the human is explored by the viewer more as a means to an end than as the end itself: the human’s emotions, actions, clothing, demeanor, and words can shed light on what’s going on around him or her.

Conversely, those trained in clairvoyant reading methods primarily DO “read” people rather than locations or objects, although there is some crossover, as people are impacted by, or are curious about, their locations.

For this study, some of the viewers were remote viewers, some were clairvoyants, and a couple were trained/experienced in both disciplines.

Note that despite the above discussion, the goal of this study was not to compare/contrast viewers’ training methods. To do that properly, we would have had to control factors such as the length of time and number of hours spent training and practicing, and we would have had to find viewers trained in only one particular method, which is difficult, as the current generation of viewers tends to explore different modalities with a variety of teachers. Instead, our intention was to invite viewers/clairvoyants to participate from a pool of candidates who had already demonstrated at least a basic level of proficiency utilizing any method and who were known to be open to volunteering for projects. Note that this project had no funding, so resources were limited.

Methodology and Project Design Considerations

Participant Selection

In mid-September 2012, an email was sent to approximately 30 viewers asking if they would like to participate in an “interesting remote viewing” project. 11 viewers responded. These viewers had previously worked on at least one other project assigned by the researchers, and many had originally been recruited through various remote viewing lists and social sites. The viewers ranged from having over 10 years’ experience and hundreds of remote viewing sessions to being a fairly new clairvoyant student with only a few sessions completed. Most of the 11 viewers had little experience with human targets.

First Tasking

As per remote viewing protocol, a neutrally worded email was sent out with only a randomly generated target number that had no significance to the target. No information was provided other than: “The Target number is 91752183. Describe the target”. The viewers were not told this was a human target.

However, the first three viewers’ sessions only described locations and had no mention of people whatsoever. The researchers felt a need to revise their “tasking” (the wording used to assign the task to the viewers), and turned to experts in the remote viewing community to determine the best way to proceed.

Rewording/Second Tasking

Lyn Buchanan, a recognized expert in the field, has overseen numerous operational projects since retiring from the military as a remote viewer. He advised that it would still be within acceptable research protocols to provide the tasking “The target is a person. Describe the person”. He explained that while traditional psychic research calls for both the remote viewers and those assigning the targets to remain completely blind to the target (the double-blind protocol), in operational projects (defined loosely as projects with a client who is seeking information to solve a problem or for a real-life purpose) viewers are often given tasking that narrows down what it is about a target that needs focusing on.

Such tasking does not diminish the ‘blindness’ the viewers have to the target, given the number of people in the world alive now, and throughout history, and those who exist as no more than a concept (e.g., Superman, Harry Potter, etc.). Lyn did point out that researchers who have not run operational projects may find this approach less valid.

In addition to considering the above advice, the researchers examined a variety of studies from other disciplines that were considered to have high scientific validity. It was found that the level of ‘blindness’ traditionally required in remote viewing research projects far exceeds the level mandated in other fields, even in projects where people’s lives are dependent on the findings.

For example, it was noted that in most pharmaceutical studies that are considered to meet double-blind standards, the subjects are not at all blind to the nature of the study itself. In fact, they are almost always told which drug is being studied and for what purpose. Rather, what they are “blind” to is the specific treatment/option applied: they won’t be told whether they are receiving the drug itself or the placebo. The same is true for the researchers who are applying the treatment and dealing directly with the study’s subjects/participants. Those administering the treatments will quite often know which drug is being studied, who is funding the study, and even what results those funding the study are seeking. The only thing they don’t know is whether the subject is being given the drug or a placebo.

In light of the above, and given that this project could fall into both categories, research as well as operational (we were seeking information about a real-life question rather than merely seeking to test psychic abilities, which was not our main focus), it was decided to change the tasking as Lyn suggested: “The target is a person. Describe the person”.

Another email was sent to the 11 viewers, with the same target number but with the changed tasking, “The target is a person. Describe the person”. The deadline for completing sessions was extended by two weeks. The 3 viewers who had already provided sessions that contained no information about a human or biological subject were asked to repeat their sessions, disregarding whatever information had emerged during their earlier attempt. Each indicated this would not be a problem and complied.

SESSION EVALUATION/SCORING

All sessions were received by October 25. Over a two-day period, the researchers spent approximately 8 hours evaluating and scoring the sessions utilizing an analytical method developed by Alexis Poquiz. This method was designed for use in Associative Remote Viewing (ARV) projects overseen by Marty Rosenblatt and Tom Atwater. Its goal was to automate a modified interpretation of Russell Targ’s ‘Confidence Ranking Scale’, in an attempt to generate more consistent judging scores.

Explanation/Justification for our judging/scoring methods

As researchers, in addition to being trained in Controlled Remote Viewing, Extended Remote Viewing and/or Clairvoyant Reading Techniques, we have had the opportunity to judge and evaluate dozens of ARV sessions on multiple occasions under the guidance and instruction of some of the leading remote viewing researchers and analysts, including Russell Targ and Skip Atwater. Debra Katz has been involved in ARV on a regular basis for the past 4 years.

ARV projects (usually predicting the outcome of future events for the purpose of wagering) and traditional RV research experiments (designed primarily to prove the existence of psychic functioning) closely mirror each other in that a viewer is tasked with viewing a specific photograph from a pool of potential photographs.

Viewers may be asked to view the photo itself, or to view the “photo site”, meaning they place themselves at the actual location to have a fuller, more visceral experience of what is actually there as opposed to only using their sense of sight to describe the photo itself. (The pros and cons to these approaches lie outside of the scope of this discussion.)

Once sessions are completed, in both ARV and traditional RV experiments, judges determine how closely a viewer’s session actually matches a particular target. In ARV, a photo is paired (associated, hence Associative Remote Viewing), with each of the possible outcomes of a future event. For example, the possible outcomes of a sports game are, (1) that a particular team will win or, (2) lose. Outcome (1) could be associated with the photo of a space shuttle taking off, and outcome (2) could be paired with a photo of a pyramid in Egypt.

The project manager will have selected photos beforehand that are equally compelling but different enough that those evaluating the sessions will be able to rate which photo has been described by the viewer’s session.

The outcome associated with the photo with the most viewer ‘hits’ is announced as the Prediction. After the event (in this example, the game) has occurred, viewers are shown ONLY the photo connected with the actual outcome; this is their feedback, and it completes the feedback/psi communication loop. The loop extends from the time the viewer did their session to the time they are shown the actual outcome photo: the session, the judging, the actual game and its outcome, and the viewer seeing the feedback all exist within the loop.
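To make the flow concrete, here is a minimal Python sketch of the ARV pairing-and-prediction logic described above. All names (the outcomes, photos, viewers, and scores) are illustrative placeholders, not data from any actual project.

```python
# Minimal sketch of the ARV pairing/judging flow described above.
# All names (outcomes, photos, viewers, scores) are illustrative placeholders.

# Step 1: pair each possible outcome of the future event with a photo.
pairings = {
    "Team A wins":  "space shuttle launch",
    "Team A loses": "pyramid in Egypt",
}

# Step 2: judges rate how well each viewer's session matches each photo
# (here, a simple 0-7 Targ-style score per photo).
session_scores = {
    "viewer_1": {"space shuttle launch": 5, "pyramid in Egypt": 2},
    "viewer_2": {"space shuttle launch": 4, "pyramid in Egypt": 1},
    "viewer_3": {"space shuttle launch": 3, "pyramid in Egypt": 6},
}

# Step 3: the photo with the most viewer "hits" determines the prediction.
def predict(pairings, session_scores):
    hits = {photo: 0 for photo in pairings.values()}
    for scores in session_scores.values():
        hits[max(scores, key=scores.get)] += 1
    winning_photo = max(hits, key=hits.get)
    return next(outcome for outcome, photo in pairings.items()
                if photo == winning_photo)

print(predict(pairings, session_scores))  # -> "Team A wins"
# Step 4 (after the event): viewers are shown ONLY the photo paired with
# the actual outcome, which closes the feedback/psi communication loop.
```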

SCORING/RATING METHODS

Most relevant here in comparing ARV methodology and the presidential prediction study is the way in which sessions are scored to predict the outcome.

From years of personal experience as a viewer or judge in hundreds of ARV sessions, it’s clear that what happens during the judging/evaluating phase is just as significant, and potentially as precarious, as what happens during the viewing stage. Any errors that occur during the 3 steps – viewing, judging, and declaring the prediction (in addition to the 4th step, providing feedback to viewers) – can result in a faulty prediction (and the loss of money for those who actually wager).

Over the years, ARV approaches have been modified and experimented with, including the way judges score the sessions. The following discussion is based mostly on ARV projects that fall under Marty Rosenblatt’s tutelage.

Typically, Russell Targ’s ‘Confidence Ranking Scale’ is used to rate remote viewing sessions. This is an 8-point scale, spanning from 0 to 7 in increasing session/target correspondence: zero is defined as the session having no correspondence with the target, whereas 7 means 100 percent correspondence with virtually no incorrect information. The system is easy to learn and utilize, which makes it the most popular method in ARV. Employing the Targ scale has been found to be far more effective, in terms of judging consistency, than using no scale at all; however, some feel it lacks sensitivity.

Applying the scale has been challenging due to its inherently subjective nature. The different scale levels are not precisely defined; they are defined in broad and somewhat subjective terms. For example, a level 3 confidence ranking is defined as a “Mixture of correct and incorrect elements, but enough of the former to indicate that the viewer has made contact with the target.” A level 4 confidence ranking is defined as “Good correspondence with several matchable elements intermixed with some incorrect information.” What is the difference between a “mixture of correct and incorrect” and “several matchable elements intermixed with some incorrect information”? One can argue that the descriptions are virtually identical in meaning. This has led to wildly differing scores between judges.

The team felt the need to adopt a more sensitive tool for judging and analysis. Alexis Poquiz is an active participant in Rosenblatt’s ARV groups and has been developing a judging tool, the ‘Dung Beetle System’, that is based on a systematic interpretation of the Targ scale. Poquiz’s interpretation uses numerically defined levels based on the percentage values of correct, incorrect, and unknown matches.

The core concept of this approach involves systematically listing out every single descriptor and sketch from the viewer’s session into a spreadsheet. Each descriptor and sketch is then rated by a judge as “Yes”, “No” or “Maybe”, similar to Lyn Buchanan’s method of scoring. The judges provide ratings for all the descriptors and sketches for both potential targets. Once every descriptor and sketch has been rated, the Dung Beetle System automatically compares the percentage values between the two targets and selects the target with higher “Yes” percentages, lower “No” percentages and lower “Maybe” percentages.
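As an illustration of the rating step, here is a minimal Python sketch of a descriptor-level score sheet. The descriptors are drawn from this project’s sessions, but the Y/N/Q values shown are illustrative; the Dung Beetle System itself is Poquiz’s spreadsheet tool, and this only mimics its tallying step.

```python
# Sketch of the descriptor-level rating step. Every descriptor from a
# session is judged "Y"/"N"/"Q" against BOTH candidates; the descriptors
# below appear in this project, but the ratings shown are illustrative.

score_sheet = {
    #  descriptor        (Romney, Obama)
    "male":              ("Y", "Y"),
    "dark hair":         ("Y", "Y"),
    "wavy hair":         ("Q", "Q"),
    "has 7 brothers":    ("N", "Y"),
    "perspires a lot":   ("Q", "Q"),
}

def tally(column):
    """Count Y/N/Q ratings in one candidate's column (0=Romney, 1=Obama)."""
    ratings = [pair[column] for pair in score_sheet.values()]
    return {k: ratings.count(k) for k in "YNQ"}

print(tally(0))  # Romney: {'Y': 2, 'N': 1, 'Q': 2}
print(tally(1))  # Obama:  {'Y': 3, 'N': 0, 'Q': 2}
# The Dung Beetle System then favors the target with the higher Yes %,
# lower No % and lower Maybe %; its exact weighting when the three
# criteria disagree is Poquiz's own logic and is not reconstructed here.
```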

What does this have to do with our project, which is not about Associative Remote Viewing or testing psychics, but rather about predicting the outcome of a presidential election?

This project is similar to an ARV project. Our goal was to accurately predict the outcome of the presidential election and to ANNOUNCE this prior to the election, and there were two outcomes: one being Obama as winner, the other Romney (there were really only two viable candidates who could win, thanks to our wonderful electoral system, barring any unusual circumstances). If, for example, one of the candidates had died, another candidate could have emerged, which could very much have skewed our results. It wouldn’t have hurt the viewers’ sessions, as they would likely have viewed the correct candidate no matter who was running; however, it would have resulted in the judges comparing the sessions to the wrong possible outcome.

Given that this project was very similar in structure to ARV projects, utilizing Poquiz’s Dung Beetle System of scoring, along with his personal services, was a natural and appropriate choice.

JUDGING METHODOLOGY

Raters/Judges – Traditional RV research would not employ judges who were also serving as project managers (typically a study has stronger validity if all roles are separated), but due to time constraints a decision was made to do the judging/rating ourselves. We already had a certain level of experience with judging and with Poquiz’s analytic system, and did not have time to train other judges. Perhaps most importantly, we wanted to experience what it was like to judge viewers’ sessions of a human target, and therefore were willing to accept that assigning ourselves the dual roles of project managers and judges would weaken the project’s legitimacy due to possible telepathic overlay.

In other words, even though we were blind to the target in terms of outcome, it’s possible a viewer could have tuned into our own preferences for a particular candidate (as much as we tried to be or desired to be unbiased). It’s also possible, and seemingly even more likely, that the viewers could have been subconsciously tuning into their own candidate preference, as shall be discussed later.

Procedures

Organizing our score sheets – After all sessions were received, the judges went through each one together and transcribed each descriptor and sketch into a spreadsheet.

Each sheet was originally formatted with five columns, headed as follows: “Romney”, “Obama”, “Both”, “Neither”, and lastly a column for “Unknown”. These columns were modified by Poquiz to, “Y” for yes, “N” for no and “Q” for maybe.

Judging by Consensus – There were two options for judging the sessions: each session could be judged independently by each judge and the scores compared (as is typically done in traditional RV studies testing for psi functioning), or the judges could work through the sessions together, as is sometimes done in ARV projects. After scanning the sessions and determining that most did not clearly point to either option/candidate (with the exception of one viewer who named Obama as their very first impression), we realized they included a number of subjective impressions that we might not agree on or understand, or that could be viewed from a position of bias.

Therefore, we decided to score the sessions by consensus, meaning that nothing would be written on the score sheet until both judges agreed. Any disagreements would have to be worked out through discussion and research of the subject matter until an agreement was reached.

Potential Biases – Starting out, we were well aware that our own personal biases regarding either candidate could very much influence how we judged a session. While Debra intended to vote for Obama, she did see some positive qualities in Romney and had no clear picture of who would win. Michelle declared herself to be completely neutral and wasn’t planning on voting, although her family is traditionally Republican and not at all pro-Obama, so we felt this balanced us out. As Michelle and Debra have known each other for over 30 years, having been childhood friends and having worked together on various projects prior to this, we agreed that if either one noticed any biases emerging in the other’s choices, we would point this out and make no determination until we both came to the same conclusion.

Challenges – It only took a few minutes of rating one session to realize there were some inherent challenges to the task at hand that we had either not anticipated or severely underestimated.

CHALLENGES TO VIEWING AND RATING A HUMAN SUBJECT AS A RV TARGET

Obama and Romney more alike than you’d think

In retrospect, both judges began this project with the naive assumption that the two candidates were quite different. (Had we assumed any differently, we may not have embarked on it at all.) After all, thinking in terms of larger concepts, one candidate was African American, the incumbent, and a Democrat with very strong democratic ideals; the other was a very wealthy Caucasian Republican from a devout Mormon background. However, in getting to the descriptors of each session, we were surprised, and somewhat dismayed, to find that many applied to both men.

Some of the descriptors that could apply to both men included words such as:
  • male
  • middle aged
  • expensive house
  • Wears suits to work
  • public figure
  • accomplished speaker
  • fixated on money
  • has a staff
  • seems suburban
  • residential area
  • fit
  • smartly dressed
  • muscular
  • tall
  • dark hair
  • contemplative
  • health good
  • girly like hands
  • approaches work like duty
  • people pay attention to him
  • hair is short
  • enjoys reading
  • enjoys learning
  • went to expensive schools
  • is smart
  • sometimes feels lonely and sad
  • father
  • on hot seat, like in court
  • being grilled by a panel or like on a panel

RACE/ETHNICITY/RELIGION

There were even some words, such as those pertaining to race/coloring/religion, that were thought to be easy to assign to one candidate or the other but weren’t, given that Obama’s mother is Caucasian and he has a lighter complexion than many people of African American descent. Words and phrases that were surprisingly challenging were:
  • Appears Caucasian – like
  • golden tan person
  • light skin
  • Wavy hair
  • The thought, Jesus Christ popped in my head.

Some words are highly subjective and unclear

For the first few hours of judging we debated every word, painstakingly searching the internet for information on the candidates or to understand definitions of words. Finally we realized that if we had this much discussion over a word or a phrase, the need for prolonged debate should act as a signal that the word or phrase be assigned a “Q” and placed in the Question/Unknown category. This helped us move along somewhat faster.

MEDIA INFLUENCE – Facts are not always facts, even when coming from the news (especially when coming from the news!)

Some of the difficulty we experienced seemed to stem from personal preconceived notions based on which media outlets we had been exposed to. Notions that at first seemed to be based on fact were, with deeper introspection, found to have come from what could only be considered a biased news source. For example, watching CNN (which Debra had a tendency to do), one heard a nonstop barrage of negative commentary about Romney, who was thought not to care about, or have high regard for, a large portion of the American population due to comments he had made while being secretly taped at a fundraising event among wealthy contributors. On the other hand, on FOX TV (Michelle’s family’s choice for news), Romney’s acts of charity were often emphasized.

PUBLIC VS. PRIVATE IMAGE
Another challenge encountered was that many people possess personality traits that are seemingly contradictory. One might think that anyone running for office would have to be an extrovert who loves attention. However, many people in public positions are actually introverts who don’t particularly enjoy being in the spotlight but have learned to cope with this aspect of their work.

Given that we as judges aren’t inside the candidates’ heads or homes (while the viewers actually may be – clairvoyant readers can often see, and will mention, both the inner and outer life of the subject), all we can go by as JUDGES is what we see or hear of the candidates in the media.
Some of the words and phrases that were debated, and on which the judges had differing opinions based on the TV networks they had watched, included:
  • gives money away
  • generous
  • caring
  • loving
  • kind
  • appears to be a thinker

Comparative/Relative words – Some words based on perceptions may have unique meaning to the viewer based on that viewer’s point of view, race, gender, physical constitution, and life experience. This is why it can be helpful to know the viewer.

One of our viewers stated the target “appears Caucasian-like”. She is African American. It’s possible her frame of reference/world view/language could carry a different meaning than that of a viewer who is Caucasian and grew up or lives in a primarily white neighborhood. Furthermore, it was known from comments she had made in the past that she was a strong Obama supporter, so when she said in her session, “I like this man”, this was assigned a “Yes” as a match for Obama. Not knowing up front what the viewers’ preferences were (for obvious reasons, they weren’t polled until AFTER the election) meant that if anyone else had made a similar comment, we would have had to leave it as a “Q” for Question/Unknown.

Viewers will tend to describe others in relation to themselves. For example, a 90-year-old is probably going to describe a 40-year-old as “young”, while a 20-year-old is going to say the 40-year-old is, well, old. A viewer who weighs 260 pounds is probably not going to refer to someone who weighs 160 pounds as heavy, whereas someone who barely weighs 105 pounds may consider that 160-pound person obese. A 5-foot woman is going to describe a 5’8” man as tall, whereas the 5’8” man may not consider another man who is the same height, or just a bit taller, to be tall. Romney and Obama are both over 6 feet tall, with an inch or two difference between them.

Problematic words in the sessions included:
  • Short
  • Tall
  • Thin
  • Large
  • Old
  • young
  • muscular
Then there was factual information that became a source of contention for the judges. At least 30 minutes was spent on two comments:
  • has 7 brothers
  • they all do similar work
Neither judge knew at the time that Obama had more than one half-brother. However, he does have 7 half-brothers in addition to a half-sister. Even after finding 3 sources online, Michelle was still suspicious of the credibility of internet sources; she couldn’t believe a fact like this would be kept out of the mainstream media when one brother in particular had received quite a bit of attention. It wasn’t until a list of the names was found that she agreed to score “has 7 brothers” a “Y”.

Regarding the statement “all do similar work”, it appeared that at least 3 did the same work when they were younger; for the others, that information was simply unknown. Debra would have liked to give this a “Yes”, knowing that viewers will often get close to the correct answer but be off slightly. However, Michelle insisted on compliance with the rules, and therefore this went to the Unknown (Q) category.

Descriptors that just could not be verified either way:
  • perspires a lot
  • sometimes feels lonely or sad
  • sometimes wears a tennis band on head
  • man teaching girl to tap dance
  • lives west of a museum (‘Y’ for Obama, ‘Q’ for Romney)

Lack of Sketches

While remote viewing experts will disagree about many aspects of remote viewing, none would disagree with the statement that sketches are an essential part of any session. Renowned remote viewers such as Ingo Swann, Joe McMoneagle, Paul Smith, and Lyn Buchanan were or are all artists. In fact, Dr. Courtney Brown of the Farsight Institute – an author, professor at Emory University, and one of the most eminent remote viewing researchers, particularly in the area of utilizing remote viewing to predict future outcomes – has recently come to the conclusion that sketches/drawings may be the most essential part of a session. This has led him to enroll in several art classes himself, and he has suggested his students do the same.

In fact, most published remote viewing projects contain far more examples of viewers’ sketches than this one does. Some feedback on the first draft of this paper from a fellow remote viewing researcher was, “Where are the sketches? I haven’t read it over yet but your paper should include more viewers’ sketches”. The response: “There weren’t any more than this”.

Out of the 11 sessions, only three contained a sketch of a face. One was not detailed enough to show a resemblance to either candidate. Another, a detailed set of sketches, resembled a religious figure; given that Romney had been a bishop for several years, this was scored as a “yes” for him. It is not included here so as not to emphasize it, as it doesn’t match the actual outcome (President Obama). It was done by the only professional artist in our group, whose entire session pointed to Romney, as did his personal preference.

The final sketch was submitted by Viewer 7. At first glance, it appeared to both judges to be a close match for Obama. However, upon learning the nature of this target, Viewer 7 felt at first that it resembled Romney but then changed their mind! The photo and sketch were submitted to numerous people not involved in the project, and as their reactions were also mixed, it was judged as Unknown.
[Figure: Sketch by Viewer 7 – this session pointed to Romney but had a high number of “Q”s.]
We feel the reason the viewers didn’t sketch facial details during their sessions (as they are trained to sketch) is that they were already aware this was a human target, and most of them would usually draw a human target either as a stick figure or a little more fleshed out, but not usually with facial details; so it didn’t occur to most to provide these, if they even had the ability to. Viewer 7’s sketch is more detailed than the average remote viewer provides when it comes to human faces. Their session also focused in minute detail on every aspect of the subject’s physical health and makeup, more so than any other viewer’s. Unfortunately, many of these details fell into the Unknown category.

Viewers could be trained to sketch more facial details, or viewers who are already artists and adept at portraits could be utilized in studies involving human subjects. This would simply require more resources and present more challenges for the researchers, who have to keep the viewers blind to the subject matter.

METHOD OF ANALYSIS
Once the scoring had been completed, the results were sent to Alexis Poquiz, who created two spreadsheets for each viewer that included the list of descriptors and sketches along with the ratings the judges had given in comparison to each candidate. Poquiz calculated the percentages that matched (Correct), did not match (Wrong), and were Unknown (Q) for both candidates, and listed them in two tables showing which viewer’s session pointed to which candidate. See Tables 1(A) and 1(B).
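For the reader who wishes to verify the tables, the percentages published in Tables 1(A) and 1(B) are consistent with the following simple formulas: Correct % and Wrong % are computed over the Yes/No counts only, while Q % is the Unknown share of all rated descriptors. A short Python check follows (our reconstruction from the published numbers, not Poquiz’s actual spreadsheet):

```python
# Reconstruction of the percentage calculations, checked against the
# published tables (example: Viewer 01 vs. Romney, Y=11, N=4, Q=9).
def percentages(y, n, q):
    correct = y / (y + n)        # Q ratings excluded from Correct/Wrong
    wrong   = n / (y + n)
    q_pct   = q / (y + n + q)    # Q's share of ALL rated descriptors
    return correct, wrong, q_pct

c, w, qp = percentages(11, 4, 9)
print(f"{c:.0%} {w:.0%} {qp:.0%}")  # 73% 27% 38%, matching Table 1(A)
```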

TABLE 1 (A) – Calculation of scores for all viewers’ sessions compared to what can be known of candidate Mitt Romney by the judges.

| Target | Remote Viewer | Y | N | Q | Correct % | Wrong % | Q % |
|---|---|---|---|---|---|---|---|
| Romney | Viewer 01 | 11 | 4 | 9 | 73% | 27% | 38% |
| Romney | Viewer 02 | 3 | 10 | 4 | 23% | 77% | 24% |
| Romney | Viewer 03 | 17 | 0 | 0 | 100% | 0% | 0% |
| Romney | Viewer 04 | 4 | 4 | 1 | 50% | 50% | 11% |
| Romney | Viewer 05 | 20 | 10 | 12 | 67% | 33% | 29% |
| Romney | Viewer 06 | 6 | 2 | 3 | 75% | 25% | 27% |
| Romney | Viewer 07 | 16 | 9 | 28 | 64% | 36% | 53% |
| Romney | Viewer 08 | 8 | 11 | 14 | 42% | 58% | 42% |
| Romney | Viewer 09 | 7 | 6 | 5 | 54% | 46% | 28% |
| Romney | Viewer 10 | 48 | 9 | 19 | 84% | 16% | 25% |
| Romney | Viewer 11 | 5 | 16 | 2 | 24% | 76% | 9% |
| Romney averages | Group | 13.18 | 7.27 | 8.82 | 60% | 40% | 26% |
TABLE 1 (B) – Calculation of scores for all viewers’ sessions compared to what can be known of candidate Barack Obama by the judges.

| Target | Remote Viewer | Y | N | Q | Correct % | Wrong % | Q % |
|---|---|---|---|---|---|---|---|
| Obama | Viewer 01 | 9 | 7 | 8 | 56% | 44% | 33% |
| Obama | Viewer 02 | 11 | 3 | 3 | 79% | 21% | 18% |
| Obama | Viewer 03 | 1 | 1 | 0 | 50% | 50% | 0% |
| Obama | Viewer 04 | 6 | 2 | 1 | 75% | 25% | 11% |
| Obama | Viewer 05 | 25 | 5 | 12 | 83% | 17% | 29% |
| Obama | Viewer 06 | 7 | 2 | 2 | 78% | 22% | 18% |
| Obama | Viewer 07 | 8 | 13 | 32 | 38% | 62% | 60% |
| Obama | Viewer 08 | 11 | 11 | 11 | 50% | 50% | 33% |
| Obama | Viewer 09 | 10 | 4 | 4 | 71% | 29% | 22% |
| Obama | Viewer 10 | 48 | 8 | 19 | 86% | 14% | 25% |
| Obama | Viewer 11 | 8 | 14 | 1 | 36% | 64% | 4% |
| Obama averages | Group | 13.09 | 6.36 | 8.45 | 64% | 36% | 23% |
RESULTS – WE HAVE A PREDICTION!
The initial judging took hours, and after completing it, it was not clear what our prediction should be. However, use of the Dung Beetle methodology guided us to a decisive conclusion. It was clear that the right decision had been made in going with Alexis’ more sensitive method of judging, as opposed to straight use of the Targ scale itself, which in other instances is more than adequate and allows a judge to come up with a score within a few minutes of receiving a viewer’s session (at least a short session).

If we had used only one viewer, Patsy Posey, who named Obama, it would have been a different story. A discussion of whether it is better to use one viewer or multiple viewers is outside the scope of this paper, but it is one worth considering.

Table 2, below, shows the predictions from each viewer. Note the “Lower Q %” column, which shows which of the targets has a lower percentage of unknowns. The assumption is that the fewer unknowns for a particular target, the more the session is leaning towards that target.

From the first column in Table 2 it can be seen that out of 11 viewers, 8 had a stronger match for Obama and 3 had a stronger match for Romney.

The ‘Lower Q %’ score (fewer unknowns indicating the session is leaning towards that target) also yielded an overall group prediction for Obama: it changed one vote from Romney to Obama, changed another vote from Romney to a tie, and changed 3 votes from Obama to a tie, with one vote for Romney remaining the same.
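The two columns of Table 2 below can be derived mechanically from the Table 1 percentages. The following Python sketch reproduces three representative rows; the comparison logic is our reading of the ‘Higher Correct %’ and ‘Lower Q %’ rules, not Poquiz’s code.

```python
# Deriving Table 2's two prediction columns from the Table 1 percentages.
# Three representative viewers shown; values are (Correct %, Q %) per
# candidate, taken from Tables 1(A) and 1(B).
viewers = {
    "Viewer 01": {"Romney": (73, 38), "Obama": (56, 33)},
    "Viewer 03": {"Romney": (100, 0), "Obama": (50, 0)},
    "Viewer 07": {"Romney": (64, 53), "Obama": (38, 60)},
}

for name, scores in viewers.items():
    (rc, rq), (oc, oq) = scores["Romney"], scores["Obama"]
    higher_correct = "Romney" if rc > oc else ("Obama" if oc > rc else "Tie")
    lower_q = "Romney" if rq < oq else ("Obama" if oq < rq else "Tie")
    print(name, higher_correct, lower_q)
# Viewer 01 Romney Obama   <- the Lower Q% criterion flips this vote
# Viewer 03 Romney Tie     <- equal Q% for both candidates: a tie
# Viewer 07 Romney Romney  <- both criteria agree
```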

TABLE 2 – VIEWER PREDICTIONS BASED ON HIGHEST PERCENTAGE CORRECT AND LOWER Q % SCORES

| Viewer | Higher Correct % | Lower Q % |
|---|---|---|
| Viewer 1 | Romney | Obama |
| Viewer 2 | Obama | Obama |
| Viewer 3 | Romney | Tie |
| Viewer 4 | Obama | Tie |
| Viewer 5 | Obama | Tie |
| Viewer 6 | Obama | Obama |
| Viewer 7 | Romney | Romney |
| Viewer 8 | Obama | Obama |
| Viewer 9 | Obama | Obama |
| Viewer 10 | Obama | Tie |
| Viewer 11 | Obama | Obama |

VIEWER PREFERENCE COMPARISON – A CASE OF UNCONSCIOUS DESIRE?

In a perfect world (and a perfect research project), viewers and judges wouldn’t even have known the election was happening, would not have seen news reports regarding the candidates, would have had no preference for either candidate, and would have known no one with preferences. Since we don’t live in that world, one of the factors we wanted to consider was whether a viewer’s preference for a candidate correlated with their session.

Even if there was 100 percent correspondence, this would not PROVE that unconscious preference had played a role. Rather, it would suggest the possibility of this more so than if there was little correspondence.

FINAL STEP OF METHODOLOGY

Feedback to Viewers – AFTER the election, viewers were informed that they had been tasked with viewing the candidate who was elected in November 2012, Barack Obama.

One week after feedback, viewers were surveyed for their candidate preference: whom they voted for, or whom they had preferred to win.

Table 3 shows each viewer’s preference compared to that viewer’s judged prediction.

From this table, 7 out of 11 viewers indicated a preference for a candidate, even though some of these did not vote, for a variety of reasons (at least one was a citizen of another country). 2 viewers did not respond to repeated inquiries regarding their preference, and 2 indicated they had no preference.

Out of the 7 that responded, ALL seven voiced a preference for the candidate that their session pointed to.

While it cannot be stated with certainty that their preference did have an impact on their session, this possibility has to be given consideration. To put this in terms of the results and the original research question (can viewers actually predict the outcome of a presidential election?): it cannot be said with certainty that the viewers were strictly viewing the winning candidate, as they may have been viewing their preference, which in 6 of the 7 cases noted here just so happened to turn out to be the winning candidate.
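This tally can be reproduced directly from Table 3 below; the following short Python check uses the table’s own values:

```python
# Reproducing the preference/prediction correspondence counts from Table 3.
rows = [  # (self-reported preference, session prediction by Higher Correct %)
    ("None", "Romney"), ("Obama", "Obama"), ("Romney", "Romney"),
    ("Obama", "Obama"), ("Obama", "Obama"), ("Unknown", "Obama"),
    ("None", "Romney"), ("Obama", "Obama"), ("Obama", "Obama"),
    ("Obama", "Obama"), ("Unknown", "Obama"),
]
stated = [(p, s) for p, s in rows if p in ("Obama", "Romney")]
print(len(stated))                           # 7 viewers stated a preference
print(sum(p == s for p, s in stated))        # all 7 sessions matched it
print(sum(s == "Obama" for _, s in stated))  # 6 of the 7 pointed to the winner
```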

Future research might explore this question in more depth, given that most emphasis in RV research has been placed on the possibility and prevention of viewers inadvertently accessing the researchers’ knowledge base rather than the actual target/outcome (hence the need for double-blind protocols). Little, if any, research had addressed the viewer’s own subconscious preference for a particular outcome until the present study.

TABLE 3 – Viewer Preference and Prediction Comparisons

| Viewer | Candidate Preference (self-reported) | Session Prediction (Higher Correct %) | Lower Q % |
|---|---|---|---|
| Viewer 01 | None | Romney | Obama |
| Viewer 02 | Obama | Obama | Obama |
| Viewer 03 | Romney | Romney | Tie |
| Viewer 04 | Obama | Obama | Tie |
| Viewer 05 | Obama | Obama | Tie |
| Viewer 06 | Unknown | Obama | Obama |
| Viewer 07 | None | Romney | Romney |
| Viewer 08 | Obama | Obama | Obama |
| Viewer 09 | Obama | Obama | Obama |
| Viewer 10 | Obama | Obama | Tie |
| Viewer 11 | Unknown | Obama | Obama |
CONCLUSION

Let’s return to our original research questions:
  1. How strongly will the viewer’s candidate preference affect their session, even when blind to the target/nature of the study?
As noted above in Table 2, 8 of the 11 viewers’ sessions had stronger hits for Obama, indicating he would be the winning candidate. As the majority of viewers’ preference was also Obama, caution is required; the possibility that viewers viewed their own preference cannot be ruled out (despite the fact that they were blind to the target).
  2. How will a project involving a human target differ from those utilizing objects and locations?
Human targets offer a number of challenges for judges, as there are aspects of people that cannot be known or verified, or that are subjective, conceptual, or paradoxical. Viewers AND judges rating sessions tend to evaluate humans in relation to themselves. They actually do this with targets involving locations and objects as well, but there the differences are more pronounced: the Taj Mahal is going to be larger than any viewer, and a windmill is going to be very active and energetic. However, when a viewer says “the man is tall” or “the man is active and energetic”, judges don’t necessarily know what the viewer means by tall or active/energetic.
  3. Is the use of human targets in remote-viewing-related research projects or applied precognition projects involving binary outcomes something that researchers or project managers may want to consider in the future?
We would have to say that in a project or study utilizing a binary protocol, in which two photos or photo sites are viewed and then rated by judges, having one photo with humans and one without would not be problematic, and would possibly be desirable given the strong differences between animate targets and inanimate, non-organic targets such as buildings or manmade, stationary objects. However, based on the challenges encountered in this project, we highly recommend not having human subjects in BOTH photos/photo sites.

Human subjects don’t make the best targets in research-related projects, given the difficulties related to judging/rating sessions, and should probably only be used in projects where information about humans is required for a specific purpose, such as when attempting to solve a real-life problem involving humans, as in a criminal case. Human subjects should probably not be utilized as the main target in projects attempting to prove the existence of psychic functioning unless, as noted above, they are selected as one option within a binary protocol.
  4. Which method/system of rating/judging sessions is the most helpful when evaluating and analyzing remote viewing targets that focus on human subjects?
The Targ scale could not easily be applied to the sessions to produce a prediction, whereas the Dung Beetle System could. The judges are confident in asserting that this relatively new system is a superior tool for a project such as this.

It should be noted, however, that the Dung Beetle System is more laborious and time-intensive. It filters sessions down to singular perceptions or very simple phrases, which means context can be lost in the process. Sessions should be kept on hand for review, even after all descriptors have been entered into the spreadsheet.


REFERENCES

1. Beischel, J., & Schwartz, G. (2007). Anomalous information reception by research mediums demonstrated using a novel triple-blind protocol. Explore: The Journal of Science & Healing, 3(1).

2. Brown, C. (2012). Remote viewing the future with a tasking temporal outbounder. Journal of Scientific Exploration, 26(1), 81-110.

3. Buchanan, L. (2003). The Seventh Sense: The Secrets of Remote Viewing as Told by a “Psychic Spy” for the U.S. Military. Gallery Books.

4. Neppe, V. Limitations of the double-blind pharmaceutical study. Telicom, the Journal of the International Society for Philosophical Enquiry.

5. Poquiz, A. Dung Beetle System, provided by its creator.

6. Smith, P. (2005). Reading the Enemy’s Mind: Inside Star Gate – America’s Psychic Espionage Program.

7. Targ, R., Katra, J., Brown, D., & Wiegand, W. (1995). Viewing the future: A pilot study with an error-detecting protocol. Journal of Scientific Exploration, 9(3), 367-380.

Special thanks to remote viewing instructors and mentors: Marty Rosenblatt, Lyn Buchanan, Lori Williams, Teresa Finch, Coleen Marenich, Courtney Brown, Mike & Susan Van Atta and all our remote viewing peers along with those at the International Remote Viewing Association.
