Field – Term 2 Formative Assessment

Today I gave a presentation reflecting on my time on the Virtual and Real Internet of Things module, followed by a project proposal. I felt the presentation went well. I spoke about my experiences and ideas for about 10 to 15 minutes, and this was followed by a discussion between Jon Counsel, Alexandros and me about where to start with my idea and what direction to pursue.

We essentially broke the project down into two component parts: the scanning element and the drawing machine itself. They raised a concern I had already considered: combining the scanner, the scanning software, the conversion of a 3-D file into a 2-D image, and then getting the drawing machine to draw that image is a lot to chain together. I realise this is going to be a complicated project, so it will be best to tackle it in separate sections. It will involve some steep learning curves: programming languages, the world of 3-D scanning software and its data, point-cloud sampling, and probably some mathematics in there too.
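To get my head around that 3-D to 2-D conversion step, here is a minimal sketch of what it might boil down to, written in Processing (which comes up below). The point data here is just random placeholder values standing in for a real scan, and the simple orthographic projection (drop z, scale and centre x and y) is only one of several ways to flatten the cloud.

    // Minimal 3-D to 2-D sketch: project placeholder scan points onto the canvas.
    ArrayList<PVector> cloud = new ArrayList<PVector>();

    void setup() {
      size(600, 600);
      // Placeholder data: a random blob standing in for a real scan.
      for (int i = 0; i < 2000; i++) {
        cloud.add(new PVector(random(-1, 1), random(-1, 1), random(-1, 1)));
      }
    }

    void draw() {
      background(255);
      stroke(0);
      for (PVector p : cloud) {
        // Orthographic projection: ignore z, then scale and centre x/y.
        float x = width / 2 + p.x * 200;
        float y = height / 2 + p.y * 200;
        point(x, y);
      }
    }

The real version would obviously need the Kinect data in place of the random points, and a perspective projection might suit some subjects better.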

Alexandros recommended I use the open-source coding platform Processing, which is based on Java – it works very well with Arduino and RPi. From the Processing welcome page: “Processing is a programming language, development environment, and online community. Since 2001, Processing has promoted software literacy within the visual arts and visual literacy within technology. Initially created to serve as a software sketchbook and to teach computer programming fundamentals within a visual context, Processing evolved into a development tool for professionals. Today, there are tens of thousands of students, artists, designers, researchers, and hobbyists who use Processing for learning, prototyping, and production.”
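The Arduino link is what interests me most, since the drawing machine will almost certainly be driven by one. As a first experiment, here is a minimal sketch of how Processing could hand coordinates to an Arduino over serial. The port index, the baud rate and the "x,y" message format are all my own placeholder choices – the Arduino code at the other end would need to be written to match.

    import processing.serial.*;

    Serial port;

    void setup() {
      size(400, 400);
      // Assumes the Arduino is the first serial device listed
      // and is listening at 9600 baud – both need checking.
      port = new Serial(this, Serial.list()[0], 9600);
    }

    void draw() {
      background(255);
    }

    void mousePressed() {
      // Send the clicked position in a made-up "x,y" text format;
      // the Arduino sketch driving the motors would have to parse this.
      port.write(mouseX + "," + mouseY + "\n");
    }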

I will continue to learn how to use Python and JavaScript as I feel these will come in handy (besides, I am already learning them).

Processing also has libraries for the Xbox Kinect module, which I was considering using for my drawing machine. With this scanner I could produce a point map or triangulation map of my subject; this could then be simplified and used to draw an abstracted image of the scanned subject. Could this move it into a fine art ‘realm’? Jon Counsel brought up something similar. He said, roughly, that given the nature of what I am trying to do, an accurate representation might be very difficult to produce in terms of recognisability. Perhaps a better idea (and, I think, a more interesting one) would be to produce simple point clouds (data point maps) of the scanned subject and then use the data and the drawing machine for a more visually artistic process or outcome. The abstracted and modified data may end up being more interesting than a direct representation.
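To see how little data it might take before the drawing becomes interesting, here is a minimal Processing sketch of that abstraction idea: keep only a small random sample of the points and join them up with a single rough line. The loadCloud() function is just a placeholder of my own for wherever the Kinect data would actually come from.

    ArrayList<PVector> sample = new ArrayList<PVector>();

    void setup() {
      size(600, 600);
      ArrayList<PVector> cloud = loadCloud();
      // Keep roughly 2% of the points – the 'gross abstraction'.
      for (PVector p : cloud) {
        if (random(1) < 0.02) sample.add(p);
      }
      background(255);
      stroke(0, 60);
      noFill();
      // Join the sampled points in order, deliberately crudely.
      beginShape();
      for (PVector p : sample) {
        vertex(width / 2 + p.x * 200, height / 2 + p.y * 200);
      }
      endShape();
    }

    ArrayList<PVector> loadCloud() {
      // Placeholder: random points standing in for a real Kinect scan.
      ArrayList<PVector> c = new ArrayList<PVector>();
      for (int i = 0; i < 5000; i++) {
        c.add(new PVector(random(-1, 1), random(-1, 1), random(-1, 1)));
      }
      return c;
    }

Varying the sample rate, or how the points are joined, already feels like an artistic decision rather than a purely technical one.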

Things to look at:

  • danielgm.net
  • Kinect and similar open alternatives
  • Gross abstraction of the collected data – how and why? What is a good/pleasing use of this?
  • Processing Tutorials
  • Point clouds and triangulated data – Pepakura?
  • Application of Blender
  • pointclouds.com – libraries
  • Photosynth.com – a Microsoft programme?
  • Richard Gregory – neuroscientist – mapping neurones and the brain – consciousness?
  • 3 ways of seeing
  • Illusion of seeing – could I create some interesting effect with the machine?
  • How does my drawing machine relate to the Internet of Things?
  • Visualising big data?