Monday, December 16, 2013

OpenCV and dlib training

Refer to:
https://github.com/mrnugget/opencv-haar-classifier-training
https://github.com/davisking/dlib/tree/master/tools/imglab
https://speech.ee.ntu.edu.tw/~hylee/mlds/2015-fall.php
https://gitee.com/Heconnor/MRF
https://www.kaggle.com/mehmetlaudatekman/support-vector-machine-object-detection

#preinstall
wget -O - https://apt.kitware.com/keys/kitware-archive-latest.asc 2>/dev/null | gpg --dearmor - | sudo tee /etc/apt/trusted.gpg.d/kitware.gpg >/dev/null
sudo apt-add-repository 'deb https://apt.kitware.com/ubuntu/ xenial main'
sudo apt update
apt-key adv --keyserver keyserver.ubuntu.com --recv-keys DE19EB17684BA42D
apt install cmake cmake-qt-gui cmake-curses-gui
apt install libjpeg8-dev libpng12-dev libgtk2.0-dev libxext-dev libopenblas-dev liblapack-dev
apt install libavcodec-dev libavutil-dev libavformat-dev libswscale-dev libavdevice-dev
apt install libv4l-dev zlib1g-dev
apt install ninja-build
pip3 install scikit-image

///////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
/////////////////////////////////////////////////////////////OpenCV/////////////////////////////////////////////////////////////
https://opencv.org/releases
download opencv-3.4.16.zip

remove libOpenCL* from /usr/lib/x86_64-linux-gnu
keep libOpenCL* in /usr/local/cuda-10.2/targets/x86_64-linux/lib

CPU_NUM=$(grep -c processor /proc/cpuinfo)
echo "CPU number = "$CPU_NUM

(a)
unzip ./opencv-3.4.16.zip
cd opencv-3.4.16
mkdir -p build
cd build

~~~if you have nvidia cuda~~~
cmake .. -G"Unix Makefiles" -DCMAKE_INSTALL_PREFIX=/opt/opencv -DCMAKE_BUILD_TYPE=Release -DCMAKE_C_FLAGS_RELEASE="-O3 -g" -DCMAKE_CXX_FLAGS_RELEASE="-O3 -g" -DBUILD_PERF_TESTS=OFF -DBUILD_TESTS=OFF -DBUILD_JPEG=ON -DBUILD_PNG=ON -DBUILD_ZLIB=ON -DWITH_LIBV4L=ON -DBUILD_opencv_python3=OFF -DBUILD_EXAMPLES=ON -DWITH_GSTREAMER=ON -DWITH_CUDA=ON

~~~if you do not have nvidia cuda~~~
cmake .. -G"Unix Makefiles" -DCMAKE_INSTALL_PREFIX=/opt/opencv -DCMAKE_BUILD_TYPE=Release -DCMAKE_C_FLAGS_RELEASE="-O3 -g" -DCMAKE_CXX_FLAGS_RELEASE="-O3 -g" -DBUILD_PERF_TESTS=OFF -DBUILD_TESTS=OFF -DBUILD_JPEG=ON -DBUILD_PNG=ON -DBUILD_ZLIB=ON -DWITH_LIBV4L=ON -DBUILD_opencv_python3=OFF -DBUILD_EXAMPLES=ON -DWITH_GSTREAMER=ON -DWITH_CUDA=OFF

~~~if you do not have nvidia cuda and want VideoCapture output in RGB~~~
cmake .. -G"Unix Makefiles" -DCMAKE_INSTALL_PREFIX=/opt/opencv -DCMAKE_BUILD_TYPE=Release -DCMAKE_C_FLAGS_RELEASE="-O3 -g" -DCMAKE_CXX_FLAGS_RELEASE="-O3 -g" -DBUILD_PERF_TESTS=OFF -DBUILD_TESTS=OFF -DBUILD_JPEG=ON -DBUILD_PNG=ON -DBUILD_ZLIB=ON -DWITH_LIBV4L=ON -DBUILD_opencv_python3=OFF -DBUILD_EXAMPLES=ON -DWITH_GSTREAMER=OFF -DWITH_CUDA=OFF

cmake --build . --config Release --target install -- -j$CPU_NUM VERBOSE=1

~~~if you do not have nvidia cuda, want VideoCapture output in RGB, and need a Debug build~~~
cmake .. -G"Unix Makefiles" -DCMAKE_INSTALL_PREFIX=/opt/opencv/debug -DCMAKE_BUILD_TYPE=Debug -DCMAKE_C_FLAGS_DEBUG="-O0 -g" -DCMAKE_CXX_FLAGS_DEBUG="-O0 -g" -DBUILD_PERF_TESTS=OFF -DBUILD_TESTS=OFF -DBUILD_JPEG=ON -DBUILD_PNG=ON -DBUILD_ZLIB=ON -DWITH_LIBV4L=ON -DBUILD_opencv_python3=ON -DBUILD_EXAMPLES=ON -DWITH_GSTREAMER=OFF -DWITH_CUDA=OFF

cmake --build . --config Debug --target install -- -j$CPU_NUM VERBOSE=1

(b)
cp -rf ~/opencv-3.4.16/samples/data ~/opencv-3.4.16/build
cd ~/opencv-3.4.16/build/bin
test and run the sample binaries
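
A quick way to confirm the /opt/opencv install works is a tiny test program; this is only a sketch, and the g++ include/library paths below are assumptions about the install layout.

// check_opencv.cpp - minimal sanity check for the /opt/opencv build
// build (paths are assumptions): g++ check_opencv.cpp -o check_opencv -I/opt/opencv/include -L/opt/opencv/lib -lopencv_core -lopencv_imgcodecs
#include <opencv2/core.hpp>
#include <opencv2/imgcodecs.hpp>
#include <iostream>

int main(int argc, char** argv)
{
    std::cout << "OpenCV version: " << CV_VERSION << std::endl;
    if (argc > 1)
    {
        cv::Mat img = cv::imread(argv[1]);   // exercises the bundled libjpeg/libpng decoders
        if (img.empty())
        {
            std::cerr << "failed to load " << argv[1] << std::endl;
            return 1;
        }
        std::cout << "loaded " << img.cols << "x" << img.rows << std::endl;
    }
    return 0;
}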

(c)
git clone --recursive https://github.com/mrnugget/opencv-haar-classifier-training.git
cd opencv-haar-classifier-training

~fix this bug: TypeError: a bytes-like object is required, not 'str'
https://github.com/mrnugget/opencv-haar-classifier-training/issues/39
gedit ./tools/mergevec.py
content = ''.join(str(line) for line in vecfile.readlines())
~change all occurrences to
content = vecfile.read()

perl bin/createsamples.pl ./positives.txt ./negatives.txt samples 1 "opencv_createsamples -bgcolor 0 -bgthresh 0 -maxxangle 1.1 -maxyangle 1.1 -maxzangle 0.5 -maxidev 40 -w 64 -h 64"

python3 ./tools/mergevec.py -v samples/ -o samples.vec

opencv_traincascade -data classifier -vec samples.vec -bg negatives.txt -numStages 5 -minHitRate 0.999 -maxFalseAlarmRate 0.5 -numPos 1 -numNeg 10 -w 64 -h 64 -mode ALL -precalcValBufSize 1024 -precalcIdxBufSize 1024
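
opencv_traincascade writes cascade.xml into the classifier directory; a minimal sketch of running it with cv::CascadeClassifier (file names and detectMultiScale parameters are assumptions):

// haar_detect.cpp - run the trained cascade on one image
#include <opencv2/core.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/objdetect.hpp>
#include <iostream>

int main(int argc, char** argv)
{
    if (argc < 3) { std::cerr << "usage: haar_detect cascade.xml image" << std::endl; return 1; }

    cv::CascadeClassifier cascade;
    if (!cascade.load(argv[1])) { std::cerr << "cannot load cascade" << std::endl; return 1; }

    cv::Mat img = cv::imread(argv[2]);
    if (img.empty()) { std::cerr << "cannot load image" << std::endl; return 1; }

    cv::Mat gray;
    cv::cvtColor(img, gray, cv::COLOR_BGR2GRAY);
    cv::equalizeHist(gray, gray);

    std::vector<cv::Rect> objs;
    cascade.detectMultiScale(gray, objs, 1.1, 3, 0, cv::Size(64, 64));   // minSize = training window
    for (const cv::Rect& r : objs)
        std::cout << r.x << "," << r.y << " " << r.width << "x" << r.height << std::endl;
    return 0;
}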

(d)
check the sample source and built binary
~/opencv-3.4.16/samples/cpp/train_HOG.cpp
~/opencv-3.4.16/build/bin/example_cpp_train_HOG
~~~or~~~
wget https://raw.githubusercontent.com/sturkmen72/HOG-object-detection/master/train_HOG.cpp
git clone --recursive https://github.com/SamPlvs/Object-detection-via-HOG-SVM

HOG + SVM dataset
wget ftp://ftp.inrialpes.fr/pub/lear/douze/data/INRIAPerson.tar

the INRIAPerson PNG files need to be converted with libpng16, otherwise you get: libpng error: IDAT: invalid distance too far back
https://www.mediafire.com/file/j4tg5zr6uwqvo7s/libpng16.py

source code:
https://www.itread01.com/content/1548471077.html
https://docs.opencv.org/3.4/d0/df8/samples_2cpp_2train_HOG_8cpp-example.html
https://www.mediafire.com/file/d8x3ks495u8t874/opencv_svm_lion_py.tar.gz
https://www.mediafire.com/file/eza56as787e7opo/opencv_hogsvm_cpp.tar.gz
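
For reference, the detection side of the HOG + SVM pipeline looks roughly like this with cv::HOGDescriptor; the sketch uses OpenCV's built-in pedestrian weights, and loading a custom detector from train_HOG.cpp via hog.load("my_detector.yml") is an assumption about that sample's output file name.

// hog_detect.cpp - sliding-window HOG + linear SVM detection sketch
#include <opencv2/core.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/objdetect.hpp>
#include <iostream>

int main(int argc, char** argv)
{
    if (argc < 2) { std::cerr << "usage: hog_detect image" << std::endl; return 1; }

    cv::Mat img = cv::imread(argv[1]);
    if (img.empty()) { std::cerr << "failed to load image" << std::endl; return 1; }

    cv::HOGDescriptor hog;   // default 64x128 detection window
    hog.setSVMDetector(cv::HOGDescriptor::getDefaultPeopleDetector());
    // a detector trained with train_HOG.cpp could be loaded instead:
    // hog.load("my_detector.yml");

    std::vector<cv::Rect> found;
    hog.detectMultiScale(img, found, 0, cv::Size(8, 8), cv::Size(32, 32), 1.05, 2);
    for (const cv::Rect& r : found)
        std::cout << r.x << "," << r.y << " " << r.width << "x" << r.height << std::endl;
    return 0;
}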

basic theory:
https://chih-sheng-huang821.medium.com/%E6%A9%9F%E5%99%A8%E5%AD%B8%E7%BF%92-%E6%94%AF%E6%92%90%E5%90%91%E9%87%8F%E6%A9%9F-support-vector-machine-svm-%E8%A9%B3%E7%B4%B0%E6%8E%A8%E5%B0%8E-c320098a3d2e

////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
/////////////////////////////////////////////////////////////Dlib/////////////////////////////////////////////////////////////
https://github.com/davisking/dlib/releases
download dlib-19.24.2.tar.gz
or
git clone --recursive https://github.com/davisking/dlib.git

CPU_NUM=$(grep -c processor /proc/cpuinfo)
echo "CPU number = "$CPU_NUM

(a)
cd dlib
mkdir build
cd build

cmake .. -G"Unix Makefiles"

~~~or, if cat /proc/cpuinfo | grep avx shows AVX support~~~
cmake .. -G"Unix Makefiles" -DCMAKE_INSTALL_PREFIX=/opt/dlib -DCMAKE_BUILD_TYPE=Release -DDLIB_USE_CUDA=ON -DUSE_SSE2_INSTRUCTIONS=ON -DUSE_SSE4_INSTRUCTIONS=ON -DUSE_AVX_INSTRUCTIONS=ON

cmake --build . --config Release --target install -- -j$CPU_NUM VERBOSE=1

~~~Debug Version~~~
cmake .. -G"Unix Makefiles" -DCMAKE_INSTALL_PREFIX=/opt/dlib/debug -DCMAKE_BUILD_TYPE=Debug -DCMAKE_C_FLAGS_DEBUG="-O0 -g" -DCMAKE_CXX_FLAGS_DEBUG="-O0 -g" -DDLIB_USE_CUDA=OFF -DUSE_SSE2_INSTRUCTIONS=OFF -DUSE_SSE4_INSTRUCTIONS=OFF -DUSE_AVX_INSTRUCTIONS=OFF

cmake --build . --config Debug --target install -- -j$CPU_NUM VERBOSE=1
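
A small sanity check that the installed dlib works, using the built-in HOG face detector; the link flags are assumptions and, depending on the build options above, BLAS/LAPACK and libjpeg/libpng may also be needed.

// dlib_check.cpp - verify the /opt/dlib install
// build (paths are assumptions): g++ -std=c++11 dlib_check.cpp -o dlib_check -I/opt/dlib/include -L/opt/dlib/lib -ldlib -lpthread
#include <dlib/image_processing/frontal_face_detector.h>
#include <dlib/image_io.h>
#include <iostream>

int main(int argc, char** argv)
{
    if (argc < 2) { std::cout << "usage: dlib_check image.jpg" << std::endl; return 1; }

    dlib::frontal_face_detector detector = dlib::get_frontal_face_detector();

    dlib::array2d<unsigned char> img;
    dlib::load_image(img, argv[1]);   // needs the JPEG/PNG support enabled in the build

    std::vector<dlib::rectangle> faces = detector(img);
    std::cout << "faces found: " << faces.size() << std::endl;
    return 0;
}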

(b)
cd ../examples
mkdir build
cd build

cmake .. -G"Unix Makefiles"

~~~or, if cat /proc/cpuinfo | grep avx shows AVX support~~~
cmake .. -G"Unix Makefiles" -DCMAKE_BUILD_TYPE=Release -DDLIB_USE_CUDA=ON -DUSE_SSE2_INSTRUCTIONS=ON -DUSE_SSE4_INSTRUCTIONS=ON -DUSE_AVX_INSTRUCTIONS=ON -DOpenCV_CONFIG_PATH=/opt/opencv/share/OpenCV -DOpenCV_DIR=/opt/opencv/share/OpenCV

cmake --build . --config Release -- -j$CPU_NUM VERBOSE=1

if the build fails, check this:
https://github.com/davisking/dlib/issues/1734

(c)
cd ../../tools/imglab
mkdir build
cd build

~~~~~Linux~~~~~
cmake .. -G"Unix Makefiles" -DCMAKE_BUILD_TYPE=Release -DDLIB_USE_CUDA=OFF
cmake --build . --config Release --target install -- -j$CPU_NUM VERBOSE=1

~~~~~Windows and vs2015~~~~~
cmake .. -G"Visual Studio 14 2015" -A x64 -DCMAKE_BUILD_TYPE=Release -DCMAKE_C_FLAGS_RELEASE="/Gm- /GS- /MT /MP /O2 /Ot" -DCMAKE_CXX_FLAGS_RELEASE="/Gm- /GS- /MT /MP /O2 /Ot" -DDLIB_USE_CUDA=OFF
cmake --build . --config Release

(d)
imglab -c ./mydataset.xml ./images
imglab ./mydataset.xml
~~hold shift and drag the left mouse button to draw a box~~
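
The XML written by imglab can be read back with dlib::load_image_dataset; a minimal sketch (the file name mydataset.xml comes from step (d), and the program assumes it runs in the directory containing the XML so the relative image paths resolve):

// load_dataset.cpp - read the boxes annotated with imglab
#include <dlib/data_io.h>
#include <dlib/array.h>
#include <dlib/array2d.h>
#include <dlib/image_io.h>
#include <iostream>

int main()
{
    dlib::array<dlib::array2d<unsigned char>> images;
    std::vector<std::vector<dlib::rectangle>> boxes;

    dlib::load_image_dataset(images, boxes, "mydataset.xml");

    std::cout << "images: " << images.size() << std::endl;
    for (size_t i = 0; i < boxes.size(); ++i)
        std::cout << "image " << i << ": " << boxes[i].size() << " boxes" << std::endl;
    return 0;
}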

(e) SVM train
dataset: https://www.idiap.ch/webarchives/sites/www.idiap.ch/resource/gestures
dlib format: https://www.mediafire.com/file/qte9drgxy4wr8yv/dlib_triesch_train_test.tar.gz

~~~C++ command
CPU_NUM=$(grep -c processor /proc/cpuinfo)
echo "CPU number = "$CPU_NUM
./train_object_detector -t ./dlib_train.xml -c 5 --eps 0.001 --threads $CPU_NUM -v

~~~python3 command
gedit  ./train_object_detector.py
~~~we train on hand images, which are not symmetric, so set add_left_right_image_flips = False
options.add_left_right_image_flips = False
options.C = 5                 #SVM cost
options.epsilon = 0.001 #SVM epsilon
options.num_threads = 12
options.be_verbose = True
training_xml_path = os.path.join(faces_folder, "dlib_train.xml")
testing_xml_path  = os.path.join(faces_folder, "dlib_test.xml")

python3 ./train_object_detector.py ./
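
After training, the C++ train_object_detector example saves an object_detector.svm file (that default name is an assumption here); a sketch of loading it and running detection:

// run_detector.cpp - run a HOG+SVM detector trained by train_object_detector
#include <dlib/image_processing.h>
#include <dlib/image_processing/scan_fhog_pyramid.h>
#include <dlib/image_io.h>
#include <dlib/serialize.h>
#include <iostream>

int main(int argc, char** argv)
{
    if (argc < 3) { std::cout << "usage: run_detector object_detector.svm image" << std::endl; return 1; }

    // same scanner type as the dlib example trainer (6-level image pyramid)
    typedef dlib::scan_fhog_pyramid<dlib::pyramid_down<6>> image_scanner_type;
    dlib::object_detector<image_scanner_type> detector;
    dlib::deserialize(argv[1]) >> detector;

    dlib::array2d<unsigned char> img;
    dlib::load_image(img, argv[2]);

    std::vector<dlib::rectangle> dets = detector(img);
    std::cout << "detections: " << dets.size() << std::endl;
    return 0;
}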

////////////////////////////////////mmod loss function explained/////////////////////////////////////
Refer to:
https://arxiv.org/pdf/1502.00046.pdf
https://medium.com/analytics-vidhya/understanding-loss-functions-hinge-loss-a0ff112b40a1
~~~Hinge loss~~~
https://www.mediafire.com/file/uiqmky8wqvebqoc/Linear_Support_Vector_Machine.tar.gz

Algorithm 4 - Loss Augmented Detection, in ~/dlib/dlib/dnn/loss.h
1.
tensor_to_dets(input_tensor, output_tensor, i, dets, -options.loss_per_false_alarm + det_thresh_speed_adjust, sub); //Sort D such that D1 >= D2 >= D3 >= ...

2.
for (size_t i = 0; i < dets.size() && final_dets.size() < max_num_dets; ++i) //for i = 1 to |D| do
{
    //if Di does not overlap {D[i-1],D[i-2],...} then
    if (overlaps_any_box_nms(final_dets, dets[i].rect_bbr)) 
        continue;

    const auto& det_label = options.detector_windows[dets[i].tensor_channel].label;

    const std::pair<double,unsigned int> hittruth = find_best_match(*truth, hit_truth_table,
    dets[i].rect, det_label);

    final_dets.push_back(dets[i].rect);

    const double truth_match = hittruth.first;

    // if hit truth rect
    if (truth_match > options.truth_match_iou_threshold) //if Di matches an element of y then
    {
        // if this is the first time we have seen a detect which hit (*truth)[hittruth.second]
        const double score = dets[i].detection_confidence;
        if (hit_truth_table[hittruth.second] == false)     //if hr = false then
        {
            hit_truth_table[hittruth.second] = true;         //hr := true
            truth_score_hits[hittruth.second] += score; //sr := <w, Φ(x,Di)>
        }
        else
        {
            //sr := sr + <w, Φ(x,Di)> + Lfa
            truth_score_hits[hittruth.second] += score + options.loss_per_false_alarm;
        }
    }
}

3.
//for i = 1 to |D| do
for (unsigned long i = 0; i < dets.size() && final_dets.size() < max_num_dets; ++i)
{
    if (overlaps_any_box_nms(final_dets, dets[i].rect_bbr)) //if Di does not overlap y* then
        continue;

    const auto& det_label = options.detector_windows[dets[i].tensor_channel].label;

    const std::pair<double,unsigned int> hittruth = find_best_match(*truth, hit_truth_table,
    dets[i].rect, det_label);
   
    const double truth_match = hittruth.first;
    if (truth_match > options.truth_match_iou_threshold) //if Di matches an element of y then
    {
        if (truth_score_hits[hittruth.second] > options.loss_per_missed_target) //if sr > Lmiss then
        {
            if (!hit_truth_table[hittruth.second])
            {
                hit_truth_table[hittruth.second] = true;
                final_dets.push_back(dets[i]);  //y* := y* U {Di}
                loss -= options.loss_per_missed_target;

                // Now account for BBR loss and gradient if appropriate.
                if (options.use_bounding_box_regression)
                {
                    ...
                }
            }
            else
            {
                final_dets.push_back(dets[i]);  //y* := y* U {Di}
                loss += options.loss_per_false_alarm;
            }
        }
    }
    else if (!overlaps_ignore_box(*truth, dets[i].rect))
    {
        // didn't hit anything
        final_dets.push_back(dets[i]);  //y* := y* U {Di}
        loss += options.loss_per_false_alarm;
    }
}
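
The Lfa, Lmiss and IoU constants used above are just fields of dlib::mmod_options; a small sketch of where they would be set when configuring loss_mmod (the 40x40 window size, the dummy box, and the values are assumptions for illustration):

#include <dlib/dnn.h>
#include <vector>

int main()
{
    // mmod_options is normally built from the training boxes so the detector
    // windows cover the annotated shapes; one dummy 40x40 box is used here only as a placeholder.
    std::vector<std::vector<dlib::mmod_rect>> boxes(1);
    boxes[0].push_back(dlib::mmod_rect(dlib::rectangle(0, 0, 39, 39)));

    dlib::mmod_options options(boxes, 40, 40);

    options.loss_per_false_alarm      = 1;    // Lfa   in Algorithm 4
    options.loss_per_missed_target    = 1;    // Lmiss in Algorithm 4
    options.truth_match_iou_threshold = 0.5;  // IoU needed for "Di matches an element of y"

    return 0;
}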

///////////////////////////////////////////////////////////////////////////////////////////////////////////
HOG + PCA + SVM sliding-window + NMS detection:
https://github.com/fatalfeel/hog_pca_svm_slider_nms_cpp

Agglomerative hierarchical clustering for 128-D face descriptors
https://www.mediafire.com/file/l0wrg26gmtf7nep/cluster_number_test.cpp
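
A sketch of the bottom-up (agglomerative) idea on 128-D descriptors, assuming dlib's bottom_up_cluster() over a pairwise distance matrix; the 0.6 merge threshold is the usual face-match distance and an assumption here:

#include <dlib/clustering.h>
#include <dlib/matrix.h>
#include <dlib/rand.h>
#include <iostream>
#include <vector>

int main()
{
    // stand-in descriptors; in practice these come from a 128-D face embedding
    std::vector<dlib::matrix<float,128,1>> descriptors(4);
    dlib::rand rnd;
    for (auto& d : descriptors)
        for (long i = 0; i < 128; ++i)
            d(i) = rnd.get_random_float();

    // pairwise Euclidean distance matrix
    const long n = descriptors.size();
    dlib::matrix<double> dists(n, n);
    for (long r = 0; r < n; ++r)
        for (long c = 0; c < n; ++c)
            dists(r, c) = dlib::length(descriptors[r] - descriptors[c]);

    // repeatedly merge the two closest clusters until no pair is closer than 0.6
    std::vector<unsigned long> labels;
    unsigned long num_clusters = dlib::bottom_up_cluster(dists, labels, 1, 0.6);

    std::cout << "clusters: " << num_clusters << std::endl;
    return 0;
}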

video:
https://drive.google.com/file/d/1mjcgJeQaYJ1SckUI_3rhjRCeUuewYaLC

///////////////////////////////////////////////////////////////////////////////////////////////////////////
Tesla told the New York Herald: I prefer to be remembered as the inventor who succeeded in abolishing war. That will be my highest pride.
http://www.teslacollection.com/tesla_articles/1898/new_york_herald/f_l_christman/tesla_declares_he_will_abolish_war (in middle section)

Albert Einstein: The release of atom power has changed everything except our way of thinking... the solution to this problem lies in the heart of mankind. If only I had known, I should have become a watchmaker.
https://atomictrauma.wordpress.com/the-scientists/albert-einstein

AI training produces a carbon footprint
https://arxiv.org/pdf/1906.02243.pdf
