And the net result of that will be that the lecture piece will no longer be competitive. So the real question won't be about lectures. It will be about how you take those online content pieces and combine them with study groups, labs, and discussion sessions to provide the kind of motivation and support that students need.
Every day there are more things to learn and share, so I need to quickly get through these MediaX notes.
Paulo Blikstein talked about the crisis of students who don't care about science and math. By first grade, 70% of students have no interest in science. By seventh grade, the share of students who self-report an interest shrinks to less than 10%. This is a leading indicator of what our economy will be like in the coming years.
He explains that our schools are not places that support science and math learning. If we want to teach swimming, we build a swimming pool. For science education, we are not creating an environment that fits the learning process. We don’t have the swimming pools for science.
Stanford is creating a program that transforms one room in a school and fits it with the necessary tools to engage students through making. This is the experience students need to make them want to learn science. Beyond learning who Leonardo da Vinci was, they do what he did, using 3D printers and circuits.
The program is called the FabLab, and its focus is creating an environment where students can apply what they learn to their own lives. These programs exist in countries around the world and are not limited to affluent communities. That being said, the percentage of students who can benefit from these opportunities is much smaller than the number of students in school.
Shortening the geographical divide
Renate Fruchter talked about improving global teamwork. The question is: how can we capitalize on the core competencies in a global corporate structure to communicate beyond space and time? The PBL Labs group is using a mixed-media reality platform to enable real-time 3D collaboration across digital spaces.
The program recognizes that face-to-face is the best form of communication, but it is not always possible. As individuals, we are wired for feedback, so we need to create systems of communication that let us communicate beyond face-to-face settings.
The model assumes six statements are needed: I know where I am, I know where you are, we know where we are, we know where we want to go, we know where we can go, we move.
Brain patterns and the mind
This talk was a bit scary. Brain scans can be used to identify whether you recognize something or whether it is the first time you have been exposed to a stimulus. Neuroimaging can show which parts of your cortex fire, and combined with machine learning algorithms, brain activity can be analyzed during real-world interaction.
The ability to monitor memory does not require humans to know exactly which points in the brain are active. Instead, machine learning can be used to look for similarities in the patterns associated with learning. The process of learning something new looks different from the process of recognizing something already known, which means changes in the brain between seeing something new and recognizing something old can be identified.
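As a rough illustration of this pattern-matching idea (my own toy sketch, not the actual method or data from the talk), a nearest-centroid classifier over synthetic "voxel" feature vectors can separate familiar from novel activity patterns without anyone knowing what any individual feature means:

```python
import random

def centroid(vectors):
    """Element-wise mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def nearest_centroid(sample, centroids):
    """Return the label whose centroid is closest (squared Euclidean)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist(sample, centroids[label]))

random.seed(0)
# Synthetic stand-in for voxel activations: "familiar" patterns cluster
# near 1.0 on each feature, "novel" patterns near 0.0.
familiar = [[1.0 + random.gauss(0, 0.1) for _ in range(8)] for _ in range(20)]
novel = [[0.0 + random.gauss(0, 0.1) for _ in range(8)] for _ in range(20)]

centroids = {"familiar": centroid(familiar), "novel": centroid(novel)}

# A held-out pattern that resembles the familiar cluster:
probe = [1.0 + random.gauss(0, 0.1) for _ in range(8)]
print(nearest_centroid(probe, centroids))  # → familiar
```

The point of the sketch is that the classifier never inspects any single feature; it only compares whole-pattern similarity, which is all the decoding described above requires.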
The decoding can be used to see whether a person recognizes a face, and these readings can be near perfect. This has forensic implications and commercial applications, and it can reveal the state of being a person believes they are in.
The brain can also be read to see whether people think they are in a certain place. Using video games, people are stimulated to perceive themselves inside the game, meaning they are mentally projecting themselves into a different 'place'. The fact that information about a person's believed 'location' can be tracked has huge implications.
Emotional wellbeing, psychological status, false systems of logic, and insanity are literally measurable. If a person feels alone or constructs a reality in which they are in a 'different' place, neuroimaging can visualize this.
The patterns still have room for improvement, but this raises many questions of ethics and privacy. When you can show up in court with a brain scan that reports whether or not you recognize a witness, we will be in a very different state of authority and operation.
Schools are poor
Roy Pea talked about the educational system. Schools are poor. Graduation rates are horrible. The Bill and Melinda Gates Foundation has a 17-year goal of reaching a 70% high school graduation rate in the United States. This is optimistic. Right now, 7 in 10 people do not graduate from college. That's a 30% rate of elementary school students who will actually make it into college. That's 30% of people who have an opportunity to develop a language to understand and express the world in a shared academic construct.
Roy talked about the points we can design differently. To start, students in the K-12 school system spend 1 million minutes in school. Classes with long seat time and lecture-based teaching with paper textbooks are not effective. We get very little data from students apart from midterms and final grades. People in every class are creating data, but the amount of feedback a teacher gets to direct a student's learning is minimal. This needs to change.
The Department of Education published a report in 2012 declaring a technology plan for education. The vision is very different from what we have now. The plan states that learning can take advantage of an always-on world, where "always on" can mean always creating (data) and always learning (consuming). Using mobile technology, there are opportunities for a single teacher to reach all students.
Student-centered methods of learning need to be taught, and not just to students, but to teachers, administrators, parents, and communities. We can have student-centered learning through the tools available to us today. We need to develop systems that support these tools, with individuals who are trained in mentoring and coaching people toward their goals. The goal is lifelong learning, and we can have this through the available learning tools.
We can have learning related to school both in and out of the classroom. As Americans, our school system has one of the shortest amounts of classroom time, but even so, we can extend the learning time. The national plan can apply changes to the school structure and give teachers better tools for visual learning opportunities.
Right now, math and science levels drop over the summer for low-income students. New services can increase student learning time out of class through games. The plan to connect America through broadband internet can be a national effort to improve access to learning materials. This would transform American education through learning powered by technology.
Shortlist of points: education analytics will be important, the US Department of Education has declared the adoption of technology, personalized adaptive learning pathways using data trails, utilization of mobile devices, out-of-classroom learning opportunities, personalized learning goals…personalization is the key.
Grand challenge #1: Design and validate an integrated system that provides real-time access to learning experiences tuned to the levels of difficulty and assistance that optimize learning for all learners, and that incorporates self-improving features that enable it to become increasingly effective through interaction with learners.
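The phrase "tuned to the levels of difficulty" suggests something like an adaptive staircase. As a minimal toy illustration (my own sketch, not anything proposed in the plan), a one-up/one-down rule keeps difficulty hovering at the edge of a learner's ability:

```python
def next_difficulty(current, was_correct, step=1, lo=1, hi=10):
    """One-up/one-down staircase: nudge difficulty toward the learner's edge."""
    if was_correct:
        return min(hi, current + step)
    return max(lo, current - step)

# A simulated learner who reliably succeeds below difficulty 6:
# the staircase climbs, then oscillates around that threshold.
level = 1
for _ in range(20):
    level = next_difficulty(level, was_correct=(level < 6))
print(level)  # oscillating between 5 and 6 by now
```

Real adaptive systems would estimate ability from much richer data trails, but the feedback loop (observe, adjust, re-present) is the same shape.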
I attended MediaX
The event was held at Stanford University to share emerging academic research. The notes from the MediaX talks can be found here. I suggest copying the content and moving it to your favorite editor.
The opening was given by Claude Steele, a scholar of Stereotype Threat. Claude's opening emphasized the changes in research that come from looking at connections between concentrations of study. He gave psychology since the implementation of the MRI brain scan as an example. Prior to image scanning, psychology was a measurement of behavior (subjective and inconsistent). MRI imaging transformed the field by creating new opportunities to measure (quantitatively and objectively) and establishing a new language for explaining a powerful science.
Interpersonal interaction coding.
The first talk, by Mark Schar, analyzed productivity, creativity, and group dynamics between "divergers" and "convergers". Divergers were described as people who questioned one another to find new answers. Convergers, by contrast, were seen as people who agreed with the rest of a group.
The study looked at how people collaborated by calculating the pace at which questions were asked and new ideas were presented. The study clearly showed a distinction between two separate groups (each composed of only one type). Divergers expressed ideas at a rapid pace in the beginning and a bit slower in the middle. The convergers had a slow start in discussing ideas, then exchanged ideas near the middle, and slowed down again at the end. The important point is to have both types together when constructing teams.
The goal of the study was to understand and code interaction dynamics between individuals working together in groups. The result was that, regardless of the type of people, the most important thing was to ensure that no "blocks" were created during discussion. As long as no blocks existed, the group continued to look for alternatives in approaching the situation.
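The pace measurement described above could be approximated by binning timestamped idea events into fixed windows. A small sketch (the timestamps are hypothetical, not data from the study):

```python
from collections import Counter

def idea_pace(timestamps, window=60):
    """Count idea events per fixed-width time window (seconds)."""
    counts = Counter(int(t // window) for t in timestamps)
    n_windows = int(max(timestamps) // window) + 1
    return [counts.get(w, 0) for w in range(n_windows)]

# Hypothetical timestamps (seconds) for a diverger-style session:
# a burst of ideas early, tapering off later.
events = [5, 12, 20, 31, 44, 70, 95, 130, 170]
print(idea_pace(events))  # → [5, 2, 2]
```

Comparing these per-window counts across groups is one simple way a diverger's early burst and a converger's mid-session peak would show up in coded data.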
Greg Kress studied how to predict long-term team performance based on personalities. The study showed that all 17 factors studied are important, but one specific factor was significantly correlated with innovation and creativity: people who were extraverted in expressing their feelings created a better team dynamic and design.
Johnathan Edelman looked at how to influence media and how media influences people. (I'm not exactly sure how this was translated to the evidence below.) Edelman studied two groups' interactions to see how they would behave in a radical design process. The result showed that groups with extraverted, physically expressive, emotionally involved individuals were identified as the ones who would contribute more to a group. These individuals were seen to gesture more than others.
Ramesh Johari, a professor in Management Science and Engineering, looked at how markets can be engineered. He looked at the variables most important in influencing exchange in an online marketplace.
When looking at a market, we ask 'who can we trade with', 'who are our competitors', and 'how much should we charge'. These questions are difficult to answer because of the limited amount of information available to us. The rise of online market platforms changes our ability to make these decisions.
Two important features of platforms are fine-grained matching of market participants and fine-grained collection of information about matches. In other words, transparency and centralization.
Market designers, like Uber, centralize the marketplace around their control. You request a ride, but you don't get a choice of driver. Once you make a request, Uber opaquely decides who will serve you.
By contrast, eBay and oDesk are very decentralized and give the user all the access needed to make the best decision. Still, because there are so many options, the decision is influenced by factors of search, rating, history, and filtering.
Both decentralization and centralization, opacity and transparency, have their benefits. Decentralization is powerful when the platform does not know what the best match for the user will be. Centralization is best in the opposite situation, when the platform knows what is best for the user.
Opaque markets can also be beneficial in crowded markets. Traditional economics states that having more choice allows the market to make the best decision. Instead, in a "web 3.0" world where there is too much information, it is not always obvious what to choose (e.g., Taobao). Having too much information and not having enough can both be negative.
The big questions are how to price these market items and how much information to release. A project exploring this issue is looking at the pricing of mobile apps and the variables that influence the purchase of these products. As for the best marketing strategy, visibility is the most important variable in competitive atmospheres.
Academic publication meta-coding
John Willinsky and Alex Garnet talked technically about providing an effective markup structure for existing PDF journals. The focus was to kill the PDF and establish a consistent method for automating the parsing and rendering of scholarly materials. The benefit of good markup is that your document becomes your metadata. You don't need a well-coded abstract; instead, the document itself contains all the content needed to understand it.
Reasons: markup is expensive when someone needs to manually mark up the document. Especially for small publishers, it can be very expensive to manually tag existing documents. As a result, a system that systematically parses existing content is valuable.
PDF doesn't support well-structured text mining and indexing. It also goes against current patterns of data storage, in that it has no way to render in different formats on mobile platforms. This prevents dynamic content from being loaded into documents. (Imagine having an up-to-date graph in your content while reading a journal.)
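To illustrate "your document becomes your metadata": with structured markup, a machine can pull the title and author straight out of the document body, no separate abstract record needed. A simplified sketch (the element names below are JATS-inspired but illustrative; they are not the exact schema the speakers used):

```python
import xml.etree.ElementTree as ET

# A simplified, JATS-inspired article fragment. Title, author, and body
# text are all explicit elements rather than positioned ink on a page.
article_xml = """
<article>
  <front>
    <article-title>Open Markup for Scholarly Publishing</article-title>
    <contrib>Jane Doe</contrib>
  </front>
  <body>
    <p>Because structure is explicit, the document is its own metadata.</p>
  </body>
</article>
"""

root = ET.fromstring(article_xml)
title = root.findtext("front/article-title")
author = root.findtext("front/contrib")
print(title, "-", author)
```

Doing the same extraction from a PDF means guessing from fonts and layout, which is exactly the expensive manual (or error-prone automated) tagging step the talk wants to eliminate.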
(to be continued…That was only 1/3rd of the presentations.)