This community is maintained by the Nantou.py user community and the College of Electrical Engineering and Computer Science at National Formosa University. It is a group of professional teachers who love smart-living technology and Python, happily researching ICT topics in digital life such as artificial intelligence, big data, the Internet of Things, cloud services, apps, assistive technology, sensor-network services, vehicular-network services, and the Internet, and applying these ICT and Python skills to improve our daily quality of life and build a better living environment.
Thursday, January 17, 2013
Passing Data Between Fragments
In the HelloFragment post we wrote our first Fragment program on Android, and in the life-cycle post we learned about the Fragment life cycle. Next we continue with passing data between Fragments.
Monday, January 14, 2013
[ Android AR ] Android Augmented Reality Teaching Resources
AndAR open-source project: http://code.google.com/p/andar/
Implementing augmented reality on Android with AndAR: http://www.linuxpilot.com/software/kiji/2011020701andar
Testing the ModelChooser error in AndAR: http://cheng-min-i-taiwan.blogspot.tw/2013/01/android-ar-andarmodelchooser.html
AndAR Android Augmented Reality video: http://www.youtube.com/watch?v=MHkobjWqLA8
Sample code for the book Pro Android Augmented Reality: https://github.com/RaghavSood/ProAndroidAugmentedReality
The book Pro Android Augmented Reality: http://it-ebooks.info/book/1212/
Sunday, January 13, 2013
[ Kinect ultra ] Analysis of the Abstract Power-Pose Detector Class
In the class definition file AbstractPowerPoseDetector.h, a member m_henshinDetector of type HenshinDetector is declared; we can use it to check whether the user has transformed into Ultraman. Besides this member, the most important part is the declaration of the virtual function detect(), which enables polymorphism: through inheritance, a base-class pointer or function call can dispatch to the override supplied by a subclass.
Listing of the class definition file AbstractPowerPoseDetector.h:
#ifndef _ABSTRACT_POWER_POSE_DETECTOR_H_
#define _ABSTRACT_POWER_POSE_DETECTOR_H_
#include "common.h"
#include "AbstractPoseDetector.h"
#include "HenshinDetector.h"
class AbstractPowerPoseDetector : public AbstractPoseDetector
{
protected:
HenshinDetector* m_henshinDetector;
public:
AbstractPowerPoseDetector(HenshinDetector* henshinDetector);
virtual ~AbstractPowerPoseDetector();
virtual void detect();
};
#endif
In the implementation file AbstractPowerPoseDetector.cpp, we can see that the constructor AbstractPowerPoseDetector() stores henshinDetector, which is later used in detect(): there, getStage() checks whether the user has already transformed into Ultraman. If not, the function returns immediately; otherwise it calls the parent-class function AbstractPoseDetector::detect().
Listing of the implementation file AbstractPowerPoseDetector.cpp:
#include "AbstractPowerPoseDetector.h"
AbstractPowerPoseDetector::AbstractPowerPoseDetector(HenshinDetector* henshinDetector) :
AbstractPoseDetector(henshinDetector->getUserDetector())
{
m_henshinDetector = henshinDetector;
}
AbstractPowerPoseDetector::~AbstractPowerPoseDetector()
{
}
void AbstractPowerPoseDetector::detect()
{
if (m_henshinDetector->getStage() != HenshinDetector::STAGE_HENSHINED) {
return;
}
AbstractPoseDetector::detect();
}
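To show how this guard cooperates with polymorphism, here is a minimal, self-contained sketch of a concrete power-pose detector. HenshinDetector is reduced to a stub, the AbstractPoseDetector base is folded into a single onPowerPose() hook, and CountingPoseDetector is a hypothetical subclass; none of this is the project's actual code.

```cpp
#include <cassert>

// Stub stand-in for the project's HenshinDetector: only the stage matters here.
class HenshinDetector {
public:
    enum Stage { STAGE_HUMAN, STAGE_HENSHINED };
    Stage m_stage;
    HenshinDetector() : m_stage(STAGE_HUMAN) {}
    Stage getStage() const { return m_stage; }
};

class AbstractPowerPoseDetector {
protected:
    HenshinDetector* m_henshinDetector;
public:
    AbstractPowerPoseDetector(HenshinDetector* h) : m_henshinDetector(h) {}
    virtual ~AbstractPowerPoseDetector() {}
    virtual void detect() {
        // Guard: power poses are only meaningful after the transformation.
        if (m_henshinDetector->getStage() != HenshinDetector::STAGE_HENSHINED)
            return;
        onPowerPose();  // dispatches to whatever subclass is behind the pointer
    }
protected:
    virtual void onPowerPose() = 0;
};

// Hypothetical subclass: counts how many times a power pose was processed.
class CountingPoseDetector : public AbstractPowerPoseDetector {
public:
    int count;
    CountingPoseDetector(HenshinDetector* h)
        : AbstractPowerPoseDetector(h), count(0) {}
protected:
    virtual void onPowerPose() { ++count; }
};
```

Calling detect() before the stage reaches STAGE_HENSHINED does nothing; afterwards each call reaches the subclass hook.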
[ Kinect ultra ] Analysis of the Abstract Pose Detector Class
The abstract pose detector class (AbstractPoseDetector) mainly implements the detect() function for callers to use. Inside detect(), four functions are called: onDetectPre(), isPosing(), onPoseDetected(), and onDetectPost(). These four functions are hooks for subclasses to implement; in other words, a subclass never modifies the flow of the detection routine, it only has to supply these hook functions. The four functions are described below:
onDetectPre(): work to do before detection.
isPosing(): is this the pose we are looking for?
onPoseDetected(): what to do once the pose has been detected.
onDetectPost(): work to do after detection.
The m_userDetector object records information about the user.
Listing of AbstractPoseDetector.h:
#ifndef _ABSTRACT_POSE_DETECTOR_H_
#define _ABSTRACT_POSE_DETECTOR_H_
#include "common.h"
#include "UserDetector.h"
#include "TimeTicker.h"
class AbstractPoseDetector
{
protected:
UserDetector* m_userDetector;
private:
// TODO: should be time instead of frame count
float m_requiredPosingStability;
float m_posingTime;
TimeTicker m_ticker;
public:
AbstractPoseDetector(UserDetector* userDetector);
virtual ~AbstractPoseDetector();
virtual void detect();
protected:
void setRequiredPosingStability(float value) { m_requiredPosingStability = value; }
virtual bool isPosing(float dt);
virtual void onPoseDetected(float dt);
virtual void onDetectPre(float dt);
virtual void onDetectPost(float dt);
};
#endif
Listing of AbstractPoseDetector.cpp:
#include "AbstractPoseDetector.h"
AbstractPoseDetector::AbstractPoseDetector(UserDetector* userDetector)
{
m_userDetector = userDetector;
m_requiredPosingStability = 0;
m_posingTime = 0;
}
AbstractPoseDetector::~AbstractPoseDetector()
{
}
void AbstractPoseDetector::detect()
{
XuUserID userID = m_userDetector->getTrackedUserID();
if (!userID) {
return;
}
float dt = m_ticker.tick();
onDetectPre(dt);
if (isPosing(dt)) {
if (m_posingTime < m_requiredPosingStability) {
m_posingTime += dt;
}
if (m_posingTime >= m_requiredPosingStability) {
onPoseDetected(dt);
}
} else {
if (m_posingTime > 0) {
m_posingTime = std::max(m_posingTime - dt, 0.0f);
}
}
onDetectPost(dt);
}
bool AbstractPoseDetector::isPosing(float dt)
{
return false;
}
void AbstractPoseDetector::onPoseDetected(float dt)
{
}
void AbstractPoseDetector::onDetectPre(float dt)
{
}
void AbstractPoseDetector::onDetectPost(float dt)
{
}
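The stability-accumulation logic inside detect() can be exercised with a simplified, self-contained sketch in which TimeTicker and UserDetector are replaced by an explicit dt argument. SimplePoseDetector and AlwaysPosingDetector are hypothetical stand-ins for illustration, not the project's classes.

```cpp
#include <algorithm>
#include <cassert>

// Simplified template-method skeleton: detect() owns the flow, subclasses
// only override the hooks. dt is passed in instead of read from a TimeTicker.
class SimplePoseDetector {
    float m_requiredPosingStability;
    float m_posingTime;
public:
    SimplePoseDetector(float stability)
        : m_requiredPosingStability(stability), m_posingTime(0) {}
    virtual ~SimplePoseDetector() {}

    void detect(float dt) {
        onDetectPre(dt);
        if (isPosing(dt)) {
            if (m_posingTime < m_requiredPosingStability)
                m_posingTime += dt;                         // accumulate hold time
            if (m_posingTime >= m_requiredPosingStability)
                onPoseDetected(dt);                         // pose held long enough
        } else if (m_posingTime > 0) {
            m_posingTime = std::max(m_posingTime - dt, 0.0f); // decay when pose lost
        }
        onDetectPost(dt);
    }
protected:
    // Hook functions with do-nothing defaults, as in the original class.
    virtual bool isPosing(float) { return false; }
    virtual void onPoseDetected(float) {}
    virtual void onDetectPre(float) {}
    virtual void onDetectPost(float) {}
};

// Hypothetical subclass that always poses and counts confirmed detections.
class AlwaysPosingDetector : public SimplePoseDetector {
public:
    int detections;
    AlwaysPosingDetector(float stability)
        : SimplePoseDetector(stability), detections(0) {}
protected:
    virtual bool isPosing(float) { return true; }
    virtual void onPoseDetected(float) { ++detections; }
};
```

With a required stability of 0.25 s and 0.1 s ticks, the pose is confirmed only once enough hold time has accumulated.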
Saturday, January 12, 2013
[ Kinect ultra ] Design Starts with Building Abstract Classes
Abstraction is a very important concept in program design (see the Wikipedia definition of abstraction): abstraction means reduction, removing unnecessary behavior and keeping only the essential attributes and behaviors. In computer science, abstraction refers to hiding implementation details to reduce a program's complexity.
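As a minimal illustration of this idea (not project code), an abstract base class exposes only the essential behavior while concrete subclasses hide the implementation details:

```cpp
#include <cassert>

// Abstract class: declares WHAT a shape can do, not HOW.
class Shape {
public:
    virtual ~Shape() {}
    virtual double area() const = 0;  // pure virtual: behavior only, no body
};

// Concrete class: the implementation detail (width * height) stays hidden
// behind the Shape interface.
class Rect : public Shape {
    double w, h;
public:
    Rect(double w_, double h_) : w(w_), h(h_) {}
    double area() const { return w * h; }
};
```

Code that works with Shape* never needs to know which concrete shape it is handling.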
In the kinect-ultra project we can see the following important abstract classes:
1. AbstractOpenGLRenderer: rendering is the process by which an application turns a model into an image, and OpenGL (Open Graphics Library) is a cross-language, cross-platform graphics API. The AbstractOpenGLRenderer class defines a member m_rctx of type RenderingContext.
class AbstractOpenGLRenderer
{
protected:
RenderingContext* m_rctx;
public:
AbstractOpenGLRenderer(RenderingContext* rctx);
virtual ~AbstractOpenGLRenderer() = 0;
};
2. AbstractTextureRenderer: the abstract texture renderer class. It has two parent classes, AbstractOpenGLRenderer and Configurable.
class AbstractTextureRenderer : public AbstractOpenGLRenderer, protected Configurable
{
protected:
int m_textureWidth;
int m_textureHeight;
XuColorPixel* m_textureData;
cv::Rect m_imageRect;
GLuint m_textureID;
GLBatch m_batch;
M3DMatrix44f m_orthoProjectionMatrix;
bool m_isLocked;
public:
AbstractTextureRenderer(RenderingContext* rctx);
virtual ~AbstractTextureRenderer() = 0;
virtual void draw();
void lock(bool value);
bool isLocked() { return m_isLocked; }
protected:
void init(const cv::Rect& imageRect);
void setupBatch();
// optionally overridable
virtual void setupTexture();
virtual void executeDraw();
// need to override
virtual void setupCopy() = 0;
virtual void copyRow(XuColorPixel* dst, int srcOffset) = 0;
virtual void finalizeCopy() = 0;
};
3. AbstractElementRenderer: the abstract element renderer class is also a subclass of AbstractOpenGLRenderer; it uses a list structure to store its elements.
template <class ElementType>
class AbstractElementRenderer : public AbstractOpenGLRenderer
{
protected:
GLuint m_textureID;
TimeTicker m_ticker;
std::list<ElementType> m_elements;
float m_gravity;
public:
AbstractElementRenderer(RenderingContext* rctx, const char* alphaTextureFile, float gravity)
: AbstractOpenGLRenderer(rctx)
{
m_textureID = readAlphaTexture(alphaTextureFile);
m_gravity = gravity;
}
virtual ~AbstractElementRenderer()
{
}
virtual void draw()
{
executeDraw();
float dt = m_ticker.tick();
if (dt > 0.0f) {
progress(dt);
}
}
protected:
void setupTextureParameters()
{
glEnable(GL_TEXTURE_2D);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glBindTexture(GL_TEXTURE_2D, m_textureID);
}
virtual void executeDraw() = 0;
void progress(float dt)
{
typename std::list<ElementType>::iterator i = m_elements.begin();
while (i != m_elements.end()) {
i->p += i->v * dt;
i->v.Y -= m_gravity * dt;
onProgress(&(*i), dt);
i->lifeTime -= dt;
if (i->lifeTime <= 0.0f) {
i = m_elements.erase(i);
} else {
i++;
}
}
}
virtual void onProgress(ElementType* element, float dt)
{
}
};
4. AbstractPoseDetector: the abstract pose detector class.
class AbstractPoseDetector
{
protected:
UserDetector* m_userDetector;
private:
// TODO: should be time instead of frame count
float m_requiredPosingStability;
float m_posingTime;
TimeTicker m_ticker;
public:
AbstractPoseDetector(UserDetector* userDetector);
virtual ~AbstractPoseDetector();
virtual void detect();
protected:
void setRequiredPosingStability(float value) { m_requiredPosingStability = value; }
virtual bool isPosing(float dt);
virtual void onPoseDetected(float dt);
virtual void onDetectPre(float dt);
virtual void onDetectPost(float dt);
};
5. AbstractPowerPoseDetector: the abstract power-pose (transformation) detector class.
class AbstractPowerPoseDetector : public AbstractPoseDetector
{
protected:
HenshinDetector* m_henshinDetector;
public:
AbstractPowerPoseDetector(HenshinDetector* henshinDetector);
virtual ~AbstractPowerPoseDetector();
virtual void detect();
};
6. AbstractEmeriumBeamDetector: the Emerium Beam detector class.
class AbstractEmeriumBeamDetector : public AbstractPowerPoseDetector, protected Configurable
{
private:
AbstractSimpleBeamRenderer* m_beamRenderer;
public:
AbstractEmeriumBeamDetector(DepthProvider* depthProvider, HenshinDetector* henshinDetector, AbstractSimpleBeamRenderer* beamRenderer);
virtual ~AbstractEmeriumBeamDetector();
protected:
void shootBeam(const XV3& p, const XV3& dv);
};
7. AbstractSensorDataProvider: the abstract sensor data provider class.
class AbstractSensorDataProvider
{
private:
static const DWORD TIMEOUT = 1000;
protected:
INuiSensor* m_pSensor;
HANDLE m_hNextFrameEvent;
bool m_isLocked;
public:
AbstractSensorDataProvider(INuiSensor* pSensor);
virtual ~AbstractSensorDataProvider() = 0;
bool waitForNextFrameAndLock();
void unlock();
protected:
virtual bool waitForNextFrameAndLockImpl(DWORD timeout) = 0;
virtual void unlockImpl() = 0;
};
8. AbstractImageStreamProvider: the abstract image stream provider class.
class AbstractImageStreamProvider : public AbstractSensorDataProvider
{
protected:
HANDLE m_hStream;
NUI_IMAGE_FRAME m_frame;
NUI_LOCKED_RECT m_lockedRect;
public:
AbstractImageStreamProvider(INuiSensor* pSensor) : AbstractSensorDataProvider(pSensor), m_hStream(NULL)
{
}
virtual ~AbstractImageStreamProvider() {
}
};
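The element-update loop in AbstractElementRenderer::progress() (item 3 above) can be sketched as a self-contained function. Vec3 and Element here are hypothetical stand-ins for the project's XV3 vector and element types:

```cpp
#include <list>

// Plain stand-ins for the project's XV3 and per-element state.
struct Vec3 { float X, Y, Z; };

struct Element {
    Vec3 p;          // position
    Vec3 v;          // velocity
    float lifeTime;  // seconds left to live
};

// One simulation step: integrate position, apply gravity to the vertical
// velocity, age each element, and erase the ones whose lifetime expired.
void progress(std::list<Element>& elements, float gravity, float dt)
{
    std::list<Element>::iterator i = elements.begin();
    while (i != elements.end()) {
        i->p.X += i->v.X * dt;
        i->p.Y += i->v.Y * dt;
        i->p.Z += i->v.Z * dt;
        i->v.Y -= gravity * dt;      // gravity pulls elements downward
        i->lifeTime -= dt;
        if (i->lifeTime <= 0.0f)
            i = elements.erase(i);   // erase returns the next valid iterator
        else
            ++i;
    }
}
```

Note the erase-while-iterating idiom: the iterator returned by erase() replaces the invalidated one, which is exactly what the original loop does.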
[ Kinect ultra ] Three Key Elements a Motion-Sensing Application Must Consider
Open-source projects often give you excellent, complete programs, but the authors usually provide only the code, so it is hard for beginners to grasp the design principles. The Ultraman motion-sensing open-source project is no exception (http://code.google.com/p/kinect-ultra/source/browse/#svn%2Ftrunk). Fortunately, the author included a UML design diagram, kinect-ultra-design.pdf (http://code.google.com/p/kinect-ultra/source/browse/trunk/kinect-ultra-design.pdf), in the source tree. Based on my analysis, it can be divided into three major blocks:
1. Image output block
2. Pose detection block
3. Kinect sensing block
Note: the figures used in this post are taken from http://code.google.com/p/kinect-ultra/source/browse/trunk/kinect-ultra-design.pdf.
From these three blocks of code, we can see that the three elements to consider when designing a motion-sensing program are output, pose, and sensing.
[ Kinect ultra ] The Ultraman Motion-Sensing Open-Source Project
Project site: http://code.google.com/p/kinect-ultra/
Demo video 1: http://www.youtube.com/watch?feature=player_embedded&v=RUG-Uvq-J-w
Demo video 2: http://www.youtube.com/watch?feature=player_embedded&v=Uuq9SCL_LXY
OpenNI test build download: http://code.google.com/p/kinect-ultra/downloads/detail?name=kinect-ultra_1.0a_for_OpenNI.zip&can=2&q=
Kinect SDK test build download: http://code.google.com/p/kinect-ultra/downloads/detail?name=kinect-ultra_1.0a_for_KinectSDK.zip&can=2&q=
Source code checkout: http://code.google.com/p/kinect-ultra/source/checkout
Ultraman class design diagram: http://code.google.com/p/kinect-ultra/source/browse/trunk/kinect-ultra-design.pdf
Ultraman special-move state diagram: http://code.google.com/p/kinect-ultra/source/browse/trunk/eye-slugger-states.pdf
Sunday, January 6, 2013
[ Android AR ] Testing the ModelChooser Error in AndAR
1. Run the program.
2. Select a model and the app crashes.
3. Examine LogCat to find the cause.
4. Search the web for an answer.
5. Visit the page that explains the problem.
6. That site has an image illustrating the difference.
7. Copy the files under lib into libs and delete the lib directory.
(Screenshots: project layout before and after the change)
9. The program now runs successfully.