Wednesday 29 March 2017

2017 Publications up to the end of March 2017

  1. Ajam, H. and Opoku Agyeman, M. (2017) A study of recent contributions on performance and simulation techniques for accelerator devices. In: International Conference on Electrical and Electronics Engineering. Turkey: IEEE ICEEE. (Accepted)
  2. Al Barrak, A., Al-Sherbaz, A., Kanakis, T. and Crockett, R. G. M. (2017) Utilisation of multipath phenomenon to improve the performance of BCH and RS codes. In: 8th Computer Science & Electronic Engineering Conference. New York: IEEE. 978-1-5090-2050-8. pp. 6-11.
  3. Al-Mahmood, A. and Opoku Agyeman, M. (2017) A study of FPGA-based System-on-Chip designs for real-time industrial application. International Journal of Computer Applications. 0975-8887. (Accepted)
  4. Al-Waisi, Z. and Opoku Agyeman, M. (2017) An overview of on-chip cache coherence protocols. In: IEEE IntelliSys Conference 2017 Proceedings. London: IEEE. (Accepted)
  5. Al-Zoiny, S. and Al-Sherbaz, A. (2017) Connected Health Services in Smart Technologies. UK: Kobo Publisher. 1230001603163.
  6. Dawood, A., Turner, S. J. and Perepa, P. (2017) Developing a new automated model to classify combined and basic gestures from complex head motion in real time by using All-vs-All HMM. Journal of Emerging Technologies and Innovative Research. 4(3), pp. 156-165. 2349-5162.
  7. Manh Phan Hung, D., Manyam Seshadri Naidu, S. and Opoku Agyeman, M. (2017) Architectures for cloud-based HPC in data centers. In: IEEE International Conference on Big Data Analysis. Beijing, China: IEEE. (Accepted)
  8. Marjan, Z., Abdulhussein, G. and Opoku Agyeman, M. (2017) A study of wireless Networks-on-Chip for emerging technologies. International Journal of Computer Systems. 4(3). 2394-1065. (Accepted)
  9. Opoku Agyeman, M., Vien, Q.-T., Hill, G., Turner, S. J. and Mak, T. (2017) An efficient channel model for evaluating Wireless NoC architectures. In: 2016 International Symposium on Computer Architecture and High Performance Computing Workshops (SBAC-PADW). Online: IEEE. 978-1-5090-4844-1. pp. 85-90.
  10. Opoku Agyeman, M. and Zong, W. (2017) An efficient 2D router architecture for extending the performance of inhomogeneous 3D NoC-based multi-core architectures. In: 2016 International Symposium on Computer Architecture and High Performance Computing Workshops (SBAC-PADW). USA: IEEE. 978-1-5090-4844-1. pp. 79-84.
  11. Saleh Alalaki, M. and Opoku Agyeman, M. (2017) A study of recent contributions on simulation tools for Network-on-Chip (NoC). International Journal of Computer Systems. 4(3). 2394-1065. (Accepted)

All views and opinions are the author's and do not necessarily reflect those of any organisation they are associated with. Twitter: @scottturneruon

Tuesday 28 March 2017

Developing a new automated model to classify combined and basic gestures from complex head motion in real time by using all-vs-all HMM


Dawood, A., Turner, S. J. and Perepa, P. (2017) Developing a new automated model to classify combined and basic gestures from complex head motion in real time by using all-vs-all HMM. Journal of Emerging Technologies and Innovative Research. 4(3), pp. 156-165. 2349-5162.
Human head gestures convey rich messages, carrying information between people as a communication tool. Nodding and shaking are commonly used non-verbal signals for communicating intent and emotion. However, the majority of head-gesture classification systems have focused on detecting nodding and shaking, while ignoring other head gestures that carry more expressive emotional signals, such as rest (up and down), turn, tilt and tilting. In this paper, we developed a new model to classify all head gestures (rest, turn, tilt, nod, shake and tilting) from complex head motions. The methodology is based on distinguishing basic head movements (rest, turn and tilt) from combined movements (nodding, shaking and tilting). The purpose of the system is to detect and label combined and basic head movements in dynamic video. In addition, this phase of the study looks at developing an affective machine that uses head movements to extract complex affective states (this work is underway). The system uses 3D head rotation angles to classify the relevant head gestures, in-plane and out-of-plane of view, during user interaction with a computer. An open-source tracker is used to detect and track head movements. The three angles obtained from the tracker (pitch, yaw and roll) are analysed and packed into sequences of observation symbols, or cues. These observations form the inputs to an all-vs-all discrete Hidden Markov Model (HMM) classifier, with one classifier used per angle. The classifiers are trained on the Boston University dataset and tested on the available Mind Reading data. The system is evaluated on video streams in real time via a webcam. It is fully automatic, without the cost of specialist technical methods, and does not require any sensitive tools.
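
For readers curious about the mechanics, the pipeline the abstract describes (quantise each rotation angle into a sequence of discrete symbols, score that sequence against one HMM per gesture class, and pick the best-scoring class) can be sketched in a few lines of plain Python. This is an illustrative reconstruction only, not the authors' code: the symbol alphabet, the two toy gesture models and all of the probabilities below are invented for the example.

import numpy as np

def quantise(angles, threshold=5.0):
    # Turn a sequence of head-rotation angles (degrees) into discrete
    # observation symbols: 0 = still, 1 = increasing, 2 = decreasing.
    deltas = np.diff(np.asarray(angles, dtype=float))
    return np.where(np.abs(deltas) < threshold, 0,
                    np.where(deltas > 0, 1, 2))

def log_likelihood(obs, start, trans, emit):
    # Forward algorithm for a discrete HMM: log P(obs | model).
    # Fine for short sequences; long ones would need scaling.
    alpha = start * emit[:, obs[0]]
    for symbol in obs[1:]:
        alpha = (alpha @ trans) * emit[:, symbol]
    return np.log(alpha.sum())

# Two-state toy models for two gesture classes; all numbers invented.
models = {
    "nod":  (np.array([0.5, 0.5]),
             np.array([[0.6, 0.4], [0.4, 0.6]]),
             np.array([[0.2, 0.7, 0.1],     # state: head moving up
                       [0.2, 0.1, 0.7]])),  # state: head moving down
    "rest": (np.array([0.5, 0.5]),
             np.array([[0.9, 0.1], [0.1, 0.9]]),
             np.array([[0.8, 0.1, 0.1],
                       [0.8, 0.1, 0.1]])),
}

pitch = [0.0, 6.0, 13.0, 6.0, -1.0, 5.0, 12.0, 4.0]  # made-up tracker output
obs = quantise(pitch)
best = max(models, key=lambda g: log_likelihood(obs, *models[g]))
print(best)  # the class whose HMM best explains the pitch sequence

In the all-vs-all arrangement the abstract describes, a model would be scored for every gesture class on every angle (pitch, yaw and roll), rather than the single-angle, two-class toy shown here.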




All views and opinions are the author's and do not necessarily reflect those of any organisation they are associated with. Twitter: @scottturneruon

Monday 27 March 2017

New Book: Connected Health Services in Smart Technologies


Al-Zoiny, S. and Al-Sherbaz, A. (2017) Connected Health Services in Smart Technologies. UK: Kobo Publisher. 1230001603163. 
https://www.kobo.com/gb/en/ebook/iqIkgVY1dDy-MShVWyQn-A

The emergence of smartphones, cloud computing and Internet networking has created a type of consumer increasingly accustomed to doing everything on a phone: checking bank balances, making purchases, watching films on mobile devices, and so on. These consumers naturally wonder why health systems cannot provide applications offering a similar service. This has led to the emergence of information technology companies working in the field of health, attracting investment capital with the flexibility to design applications that directly meet the needs of groups of patients. At the same time, obstacles have emerged for these IT companies, notably a lack of access to health data and no agreement on how to distribute the economic benefits arising from smartphone applications. Meanwhile, IT leaders exploring the potential of technology in health care must answer the following basic questions:
  • Who should pay for applications and electronic services in the field of health?
  • What evidence is there for the effectiveness of the services an application provides, and which of these justify payment?
  • What conditions must be in place as a starting point for developing health applications with a viable business model?


We believe the solution is to strengthen cooperation between health providers and technology companies by enabling the exchange of health data, allowing more efficient and adaptive health-care delivery. The national health system must recognise that its framework for health-care data needs updating: from demanding standardised patient health records to providing data access through application interfaces. The framework of the electronic health services system would be operated by accredited third parties and could also be directed by the health system itself.

Such a database system could revolutionise the provision of health services and help health systems begin to reduce the cost of this development, with stakeholders determining how benefits are distributed while taking into account five basic principles:

  • Potential effects of technology on health care systems
  • Organizational changes
  • Secure the correct data
  • Financing electronic health systems
  • Security and privacy of patient data



All views and opinions are the author's and do not necessarily reflect those of any organisation they are associated with. Twitter: @scottturneruon

Wednesday 22 March 2017

Mini project: Controllable (sort of) junkbots

What is a Junkbot?
It is, basically, an unbalanced electric motor that shakes and makes junk (e.g. a drinks can or a plastic cup) move. An example is shown below. As this blog is about Computing in Northamptonshire, it might be interesting to add in some control (well, sort of).

Three approaches to control it will be considered here:
- via Raspberry Pi;
- via Micro:Bit;
- via Crumble Controller.


1. Raspberry Pi based

One way is to combine a Raspberry Pi with a junkbot, using Python and Pimoroni's Explorer HAT Pro to control it.

First, before the Explorer HAT can be used, the appropriate library needs to be installed from the Terminal using the command below:

curl get.pimoroni.com/explorerhat | bash

A simple piece of Python code to control it is shown below.


import explorerhat
from time import sleep

def spin1(duration):
    # Run motor one forward at full speed for 'duration' seconds.
    explorerhat.motor.one.forward(100)
    sleep(duration)
    explorerhat.motor.one.stop()

def spin2(duration):
    # Run motor one backward at full speed for 'duration' seconds.
    explorerhat.motor.one.backward(100)
    sleep(duration)
    explorerhat.motor.one.stop()
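
The functions above only define the behaviour; calling them, for example from the Python prompt on the Pi with the HAT fitted, is what actually shakes the junkbot. The durations here are just illustrative:

spin1(2)   # motor one forward for 2 seconds
spin2(2)   # then backward for 2 seconds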



2. Micro:bit
What about the recently released Micro:Bit: can it be used to control a junkbot?

2.1 Introduction
The project was to look into developing junkbots controlled using a Micro:Bit, and also to produce some materials for schools to use with or without outside assistance.

2.2 Approach used in the project.
A Micro:Bit was selected for two reasons. 


An example piece of code is shown below: pressing the buttons spins the motor anticlockwise or clockwise (depending on the wiring) or stops it.

from microbit import *

def startIt():
    # Switch both motor outputs on: pins 8 and 12 drive the first channel
    # of the Kitronik motor board, pins 0 and 16 the second.
    pin8.write_digital(1)
    pin12.write_digital(0)
    pin0.write_digital(1)
    pin16.write_digital(0)

def leftTurn(duration):
    # Reverse the first channel for 'duration' milliseconds.
    pin8.write_digital(0)
    pin12.write_digital(1)
    sleep(duration)

def stopIt():
    # Brake the first channel (both pins high) and pause for two seconds.
    pin8.write_digital(1)
    pin12.write_digital(1)
    sleep(2000)

while True:
    startIt()
    if button_a.is_pressed():
        leftTurn(100)
    if button_b.is_pressed():
        stopIt()
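
The code above only switches the motor fully on or off. MicroPython on the Micro:Bit also offers write_analog (values 0-1023), so crude speed control might be possible; this is a speculative sketch, and it assumes the motor board's input tolerates a PWM signal:

def setSpeed(level):
    # Speculative: PWM the first channel's forward pin; 'level' is 0-1023.
    pin8.write_analog(level)
    pin12.write_digital(0)

A call such as setSpeed(512) would then run the motor at roughly half power, if the board behaves.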

2.3 Suggested Resource List
  • Small Electric Motor
  • Kitronik Motor Board
  • Battery Pack
  • BBC Micro:bit
  • Pens
  • Junk (Can or Bottle)
  • Wires
  • Tape
  • Scissors
  • Broken propeller or unbalanced load
  • Screwdriver

3. Crumble

The Crumble Controller, from Redfern Electronics, is an excellent board for this project: it is relatively cheap, it is programmable with its own graphical language, and it has motor drivers built in. In the accompanying photograph, the parts used (apart from adhesive tape) can be seen.


3.1. Wiring up
Using croc-clips ideally (or loops of wire if not), connect the battery and the motors to the controller. Plug the USB cable into the controller and the computer.

3.2. Running and Controlling
Make sure the Crumble software (http://redfernelectronics.co.uk/crumble/) is installed on the computer.

An example is shown below that drives the motor forward and then backward repeatedly. You might need to adjust the percentage values by experiment for the motor used.
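
The original post showed the program as a screenshot of Crumble blocks, which has not survived here. A rough textual sketch of an equivalent block program (block names approximated from the Crumble software, percentages invented) would be:

do forever:
    motor 1 FORWARD at 75%
    wait 1 second
    motor 1 REVERSE at 75%
    wait 1 second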




4. Future Directions

  • In all three approaches a further motor can be driven, so adding a second motor is one development (see the sketch for the Explorer HAT case after this list).
  • Playing with a less well-featured motor driver board for the Micro:Bit or Raspberry Pi approaches may bring the cost down further.
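
As a concrete starting point for the first bullet, the Explorer HAT Pro exposes a second motor channel as motor.two, so a sketch like the following (assuming a second motor is wired to that channel) could drive both at once:

def spinBoth(duration):
    # Drive both motor channels forward, then stop them together.
    explorerhat.motor.one.forward(100)
    explorerhat.motor.two.forward(100)
    sleep(duration)
    explorerhat.motor.one.stop()
    explorerhat.motor.two.stop()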






All views and opinions are the author's and do not necessarily reflect those of any organisation they are associated with. Twitter: @scottturneruon