Commit 19aded91 by amir

new project

parent 178e53cd
*.o
*.dSYM
*.csv
*.out
*.png
*.jpg
*.pyc
old/
mnist/
data/
caffe/
grasp/
images/
opencv/
convnet/
decaf/
submission/
cfg/
darknet
.fuse*
# OS Generated #
.DS_Store*
ehthumbs.db
Icon?
Thumbs.db
*.swp
YOLO LICENSE
Version 2, July 29 2016
THIS SOFTWARE LICENSE IS PROVIDED "ALL CAPS" SO THAT YOU KNOW IT IS SUPER
SERIOUS AND YOU DON'T MESS AROUND WITH COPYRIGHT LAW BECAUSE YOU WILL GET IN
TROUBLE HERE ARE SOME OTHER BUZZWORDS COMMONLY IN THESE THINGS WARRANTIES
LIABILITY CONTRACT TORT LIABLE CLAIMS RESTRICTION MERCHANTABILITY. NOW HERE'S
THE REAL LICENSE:
0. Darknet is public domain.
1. Do whatever you want with it.
2. Stop emailing me about it!
DO WHAT THE FUCK YOU WANT TO PUBLIC LICENSE
Version 2, December 2004
Copyright (C) 2004 Sam Hocevar <sam@hocevar.net>
Everyone is permitted to copy and distribute verbatim or modified
copies of this license document, and changing it is allowed as long
as the name is changed.
DO WHAT THE FUCK YOU WANT TO PUBLIC LICENSE
TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
0. You just DO WHAT THE FUCK YOU WANT TO.
RNN LICENSE Version 3, June 21 2017
Copyright (c) 1990, 1989, 1999 Free87337 May 48 THIRD PARTIES OR ANY OTHER THE
COMPLAIN OR CONSEQUENTIAL DAMAGES AND REGARDLESS OF WHETHER IN CONTRACT, TO THE
EXTENT REPAIR OR AGENTS (NOT THE IN ANY EVENT). THE SOFTWARE WILL BE
UNINTERRUPTED OR ERROR-FREE OR ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
OUT OF THE USE OF ALL THE WORK (GOVERNED CODE) HIM RESPONSES, OR OF FINES,
SPECIAL, INCIDENTAL, CONSEQUENTIAL, PUNITIVE OR ANY OTHER OR OTHER HARL UNDER NO
CIRCUMSTANCES AND UNDER NO LEGAL THEORY, WHETHER TORT (INCLUDING NEGLIGENCE),
PATENT PERMITTED BY THE INSTAGRAM PARENT STATE OR TORT (INCLUDING NEGLIGENCE),
PRODUCT LIABILITY OR OTHERWISE, ARISING OUT OF OR IN CONNECTION WITH THE
SOFTWARE OR THE USE OR ANYTHING PROVIDED IN THIS PRODUCT, COMMIS AND SERVICES
ARE LICENSED SOFTWARE AND ANY RESULE OR ANY OTHER THE COPYRIGHT HOLDERS BE
LIABLE FOR ANY SPECIAL, INCIDENTAL, CASE, SUCH WARRANTIES, EXPRESS OR IMPLIED,
INCLUDING, WITHOUT LIMITATION, WARRANTIES THAT THE COPYRIGHT HOLDERS AND/OR ANY
PERSON FOR ANY INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES OF ANY
EXPRESS OR DISTRIBUTE THAT ALL CLAIMS ARE SHALL CREATE DERAVE BE LIABLE TO YOU
WILL HAVE BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
6. TERMINATION. TO THE EXTENT PERMITTED BY LAW, NO USE OF THE COVERED CODE IS
WITH YOU. SHOULD ANY COVERED CODE PROVE DEFECTIVE IN ANY RESPECT, YOU (NOT THE
INITIAL DEVELOPER OR ANY OTHER CONTRIBUTOR) ASSUME THE COST OF ANY NECESSARY
SERVICING, REPAIR OR COULT OR IN ANY WAY OUT OF THE USE OF THE WEBSITES OR
SERVICE WILL BE CONSEQUENTIAL DAMAGES OF ANY KIND HAS BEEN ADVISED OF THE
POSSIBILITY OF SUCH DAMAGES.
This paragraph Agreement constitutes the entire agreement between the parties
with respect to the Work licensed here. However, if you place the name of the
fact that the arbitration was the consultation of the parties as a "patent is".
Subject to the terms and conditions of this License, Contributor has knowledge
that a license under a third party may also be used to endorse or promote
products derived from the Work, and there is no warranty on the Software and
Science Fees. For the purposes of this Agreement, attach the following
disclaimers (without liabilities of written notice to the Subject Software) in a
manner that a product is under common control with you. The Free Software
Foundation may publish revised and/or new versions of the License for the
Modifications made by the applicable terms. The Recipient shall promptly retain
the covered works for any reason be entered in any federal or state or login
Restricted Laws appearing in the United States or any of its own information
that is not disabled from a derivative work except as expressly permitted in
this License, to the extent that they are in receiving the Software and Source
Code or any exercise of the rights granted to You by this License or a
Contributor made by the Licensor or are authorized to make a reasonable
retirement by the courts of the courts located in Santa Clara County, California
printed and related to the Work or “Company” and Apache Software Foundation. If
the Licensor shall be entitled to reflect your rights to use the Software and
the Software to exercise the rights granted to the recipient without a
requirement to exercise the rights granted by the Agreement to the provision
will begin will appear in such cases, you will use such information without such
corporation shall be an officer with respect to any part of the Software or any
portion thereof. Capitalized terms are included in the Initial Contributor and
under no circumstances will license the Service at any time and for any direct,
indirect, special, incidental, or consequential damages of or assist in
connection with any Services or the registration purposes only to the extent
that it includes any or all means including the processing of which you download
any derivative work. Any of the purchases’ transmission purposes are made
available, if any, in other circumstances, we may review the copyright notice.
In the event that this Agreement is required to give us strict content. The
inclusion of the other party hereunder may also notify you Intellectual Property
Rights to any third party. This means that the Source Code exists of the Work
will not charge a program available to you at any time. You must include a
prominent statement that the Software is governed under a particular version of
this Agreement. You must include a provision to the extent that there is no
warranty for the content of others. You agree that the Recipient was appointed
as a Contributor, (c) are effective until terminated by hereunder, then the
registration are not disabled and not limited to, submit any Customer Data
without the updated use of the Software and that no fee is released. You grant
to Use Other Arbitration Rules for Diagnostic or Services may use or modify the
Apple Software and Consolidated Apple Software or Services. The Company may have
full risk as a product of the Compatible Source. A Contribution by the Licensor
or by the updated Software under the following conditions we can redistribute
any General Provision of this Agreement. If the Program is used in accordance
with the terms of this Agreement, Customer may provide advertisements from your
devices that clause you can your employer or a transaction or country that has
been controlled by the arbitrator, that they will be useful of this Agreement.
The term "Open Source Software is available in connection with the program, and
you may not protect the combination of the Covered Code. You should like to
select a user's rights to charge a copy of this License. I are Contributor's
confidentiality of the exercise of the rights granted herein. Such a covered
work is released as a consequence, the Licensor shall be eligible for a purpose
or subcontractor of the person or entity to the user of the user, then the word
"Application" means having the original fee for any reason; and that no patent
license to more than fifty stated close of the license term. The terms of this
License will the license terms and conditions set forth in Section 2.2 (OPEC)
and You will not use the Software or any set of responsibility for any resulting
information that the Original Code warrants that you have the right to disclose
these information (or in the notification; or (iii) late use of the software or
any third party to the three (50) days before such belief to the extent that it
includes a court court obtains the rights granted by this License.
META-LICENSE
Version 1, June 21 2017
Any and all licenses may be applied to the software either individually
or in concert. Any issues, ambiguities, paradoxes, or metaphysical quandaries
arising from this combination should be discussed with a local faith leader,
hermit, or guru. The Oxford comma shall be used.
MIT License
Copyright (c) 2017 Joseph Redmon
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
YOLO LICENSE
Version 1, July 10 2015
THIS SOFTWARE LICENSE IS PROVIDED "ALL CAPS" SO THAT YOU KNOW IT IS SUPER
SERIOUS AND YOU DON'T MESS AROUND WITH COPYRIGHT LAW BECAUSE YOU WILL GET IN
TROUBLE HERE ARE SOME OTHER BUZZWORDS COMMONLY IN THESE THINGS WARRANTIES
LIABILITY CONTRACT TORT LIABLE CLAIMS RESTRICTION MERCHANTABILITY SUBJECT TO
THE FOLLOWING CONDITIONS:
1. #yolo
2. #swag
3. #blazeit
GPU=0
CUDNN=0
OPENCV=0
OPENMP=0
DEBUG=0
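# Build-time switches: set any of the flags above to 1 on the command line
# (e.g. "make GPU=1 CUDNN=1") to compile in the corresponding support.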
ARCH= -gencode arch=compute_20,code=[sm_20,sm_21] \
-gencode arch=compute_30,code=sm_30 \
-gencode arch=compute_35,code=sm_35 \
-gencode arch=compute_50,code=[sm_50,compute_50] \
-gencode arch=compute_52,code=[sm_52,compute_52]
# This is what I use, uncomment if you know your arch and want to specify
# ARCH= -gencode arch=compute_52,code=compute_52
VPATH=./src/:./examples
SLIB=libdarknet.so
ALIB=libdarknet.a
EXEC=darknet
OBJDIR=./obj/
CC=gcc
NVCC=nvcc
AR=ar
ARFLAGS=rcs
OPTS=-Ofast
LDFLAGS= -lm -pthread
COMMON= -Iinclude/ -Isrc/
CFLAGS=-Wall -Wno-unknown-pragmas -Wfatal-errors -fPIC
ifeq ($(OPENMP), 1)
CFLAGS+= -fopenmp
endif
ifeq ($(DEBUG), 1)
OPTS=-O0 -g
endif
CFLAGS+=$(OPTS)
ifeq ($(OPENCV), 1)
COMMON+= -DOPENCV
CFLAGS+= -DOPENCV
LDFLAGS+= `pkg-config --libs opencv`
COMMON+= `pkg-config --cflags opencv`
endif
ifeq ($(GPU), 1)
COMMON+= -DGPU -I/usr/local/cuda/include/
CFLAGS+= -DGPU
LDFLAGS+= -L/usr/local/cuda/lib64 -lcuda -lcudart -lcublas -lcurand
endif
ifeq ($(CUDNN), 1)
COMMON+= -DCUDNN
CFLAGS+= -DCUDNN
LDFLAGS+= -lcudnn
endif
OBJ=gemm.o utils.o cuda.o deconvolutional_layer.o convolutional_layer.o list.o image.o activations.o im2col.o col2im.o blas.o crop_layer.o dropout_layer.o maxpool_layer.o softmax_layer.o data.o matrix.o network.o connected_layer.o cost_layer.o parser.o option_list.o detection_layer.o route_layer.o box.o normalization_layer.o avgpool_layer.o layer.o local_layer.o shortcut_layer.o activation_layer.o rnn_layer.o gru_layer.o crnn_layer.o demo.o batchnorm_layer.o region_layer.o reorg_layer.o tree.o lstm_layer.o
EXECOBJA=captcha.o lsd.o super.o voxel.o art.o tag.o cifar.o go.o rnn.o rnn_vid.o compare.o segmenter.o regressor.o classifier.o coco.o dice.o yolo.o detector.o writing.o nightmare.o swag.o darknet.o
ifeq ($(GPU), 1)
LDFLAGS+= -lstdc++
OBJ+=convolutional_kernels.o deconvolutional_kernels.o activation_kernels.o im2col_kernels.o col2im_kernels.o blas_kernels.o crop_layer_kernels.o dropout_layer_kernels.o maxpool_layer_kernels.o network_kernels.o avgpool_layer_kernels.o
endif
EXECOBJ = $(addprefix $(OBJDIR), $(EXECOBJA))
OBJS = $(addprefix $(OBJDIR), $(OBJ))
DEPS = $(wildcard src/*.h) Makefile include/darknet.h
#all: obj backup results $(SLIB) $(ALIB) $(EXEC)
all: obj results $(SLIB) $(ALIB) $(EXEC)
$(EXEC): $(EXECOBJ) $(ALIB)
$(CC) $(COMMON) $(CFLAGS) $^ -o $@ $(LDFLAGS) $(ALIB)
$(ALIB): $(OBJS)
$(AR) $(ARFLAGS) $@ $^
$(SLIB): $(OBJS)
$(CC) $(CFLAGS) -shared $^ -o $@ $(LDFLAGS)
$(OBJDIR)%.o: %.c $(DEPS)
$(CC) $(COMMON) $(CFLAGS) -c $< -o $@
$(OBJDIR)%.o: %.cu $(DEPS)
$(NVCC) $(ARCH) $(COMMON) --compiler-options "$(CFLAGS)" -c $< -o $@
obj:
mkdir -p obj
backup:
mkdir -p backup
results:
mkdir -p results
.PHONY: clean
clean:
rm -rf $(OBJS) $(SLIB) $(ALIB) $(EXEC) $(EXECOBJ)
![Darknet Logo](http://pjreddie.com/media/files/darknet-black-small.png)
# Darknet
Darknet is an open source neural network framework written in C and CUDA. It is fast, easy to install, and supports CPU and GPU computation.
For more information see the [Darknet project website](http://pjreddie.com/darknet).
For questions or issues please use the [Google Group](https://groups.google.com/forum/#!forum/darknet).
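For a quick feel for the C API used throughout this commit, here is a minimal sketch of loading a network and running a forward pass (the cfg, weights, and image paths are placeholders; error handling is omitted):

```c
#include <stdio.h>
#include "darknet.h"

int main()
{
    /* Parse the architecture from a cfg file and load trained weights. */
    network net = parse_network_cfg("cfg/tiny-yolo.cfg");
    load_weights(&net, "tiny-yolo.weights");
    set_batch_network(&net, 1);          /* predict one image at a time */

    /* Load an image, letterbox it to the network's input size, and predict. */
    image im = load_image_color("data/dog.jpg", 0, 0);
    image sized = letterbox_image(im, net.w, net.h);
    float *predictions = network_predict(net, sized.data);
    printf("first output: %f\n", predictions[0]);

    free_image(sized);
    free_image(im);
    return 0;
}
```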
#include "darknet.h"
#include <sys/time.h>
void demo_art(char *cfgfile, char *weightfile, int cam_index)
{
#ifdef OPENCV
network net = parse_network_cfg(cfgfile);
if(weightfile){
load_weights(&net, weightfile);
}
set_batch_network(&net, 1);
srand(2222222);
CvCapture * cap;
cap = cvCaptureFromCAM(cam_index);
char *window = "ArtJudgementBot9000!!!";
if(!cap) error("Couldn't connect to webcam.\n");
cvNamedWindow(window, CV_WINDOW_NORMAL);
cvResizeWindow(window, 512, 512);
int i;
int idx[] = {37, 401, 434};
int n = sizeof(idx)/sizeof(idx[0]);
while(1){
image in = get_image_from_stream(cap);
image in_s = resize_image(in, net.w, net.h);
show_image(in, window);
float *p = network_predict(net, in_s.data);
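// ANSI escape codes: clear the terminal and move the cursor to the top-left.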
printf("\033[2J");
printf("\033[1;1H");
float score = 0;
for(i = 0; i < n; ++i){
float s = p[idx[i]];
if (s > score) score = s;
}
printf("I APPRECIATE THIS ARTWORK: %10.7f%%\n", score*100);
printf("[");
int upper = 30;
for(i = 0; i < upper; ++i){
printf("%c", ((i+.5) < score*upper) ? 219 : ' ');
}
printf("]\n");
free_image(in_s);
free_image(in);
cvWaitKey(1);
}
#endif
}
void run_art(int argc, char **argv)
{
int cam_index = find_int_arg(argc, argv, "-c", 0);
char *cfg = argv[2];
char *weights = argv[3];
demo_art(cfg, weights, cam_index);
}
#include "darknet.h"
void train_cifar(char *cfgfile, char *weightfile)
{
srand(time(0));
float avg_loss = -1;
char *base = basecfg(cfgfile);
printf("%s\n", base);
network net = parse_network_cfg(cfgfile);
if(weightfile){
load_weights(&net, weightfile);
}
printf("Learning Rate: %g, Momentum: %g, Decay: %g\n", net.learning_rate, net.momentum, net.decay);
char *backup_directory = "/home/pjreddie/backup/";
int classes = 10;
int N = 50000;
char **labels = get_labels("data/cifar/labels.txt");
int epoch = (*net.seen)/N;
data train = load_all_cifar10();
while(get_current_batch(net) < net.max_batches || net.max_batches == 0){
clock_t time=clock();
float loss = train_network_sgd(net, train, 1);
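// Keep an exponential moving average of the loss for smoother logging.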
if(avg_loss == -1) avg_loss = loss;
avg_loss = avg_loss*.95 + loss*.05;
printf("%ld, %.3f: %f, %f avg, %f rate, %lf seconds, %ld images\n", get_current_batch(net), (float)(*net.seen)/N, loss, avg_loss, get_current_rate(net), sec(clock()-time), *net.seen);
if(*net.seen/N > epoch){
epoch = *net.seen/N;
char buff[256];
sprintf(buff, "%s/%s_%d.weights",backup_directory,base, epoch);
save_weights(net, buff);
}
if(get_current_batch(net)%100 == 0){
char buff[256];
sprintf(buff, "%s/%s.backup",backup_directory,base);
save_weights(net, buff);
}
}
char buff[256];
sprintf(buff, "%s/%s.weights", backup_directory, base);
save_weights(net, buff);
free_network(net);
free_ptrs((void**)labels, classes);
free(base);
free_data(train);
}
void train_cifar_distill(char *cfgfile, char *weightfile)
{
srand(time(0));
float avg_loss = -1;
char *base = basecfg(cfgfile);
printf("%s\n", base);
network net = parse_network_cfg(cfgfile);
if(weightfile){
load_weights(&net, weightfile);
}
printf("Learning Rate: %g, Momentum: %g, Decay: %g\n", net.learning_rate, net.momentum, net.decay);
char *backup_directory = "/home/pjreddie/backup/";
int classes = 10;
int N = 50000;
char **labels = get_labels("data/cifar/labels.txt");
int epoch = (*net.seen)/N;
data train = load_all_cifar10();
matrix soft = csv_to_matrix("results/ensemble.csv");
float weight = .9;
scale_matrix(soft, weight);
scale_matrix(train.y, 1. - weight);
matrix_add_matrix(soft, train.y);
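// Blend the ensemble's predictions (weight .9) with the one-hot labels (.1) to form soft distillation targets.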
while(get_current_batch(net) < net.max_batches || net.max_batches == 0){
clock_t time=clock();
float loss = train_network_sgd(net, train, 1);
if(avg_loss == -1) avg_loss = loss;
avg_loss = avg_loss*.95 + loss*.05;
printf("%ld, %.3f: %f, %f avg, %f rate, %lf seconds, %ld images\n", get_current_batch(net), (float)(*net.seen)/N, loss, avg_loss, get_current_rate(net), sec(clock()-time), *net.seen);
if(*net.seen/N > epoch){
epoch = *net.seen/N;
char buff[256];
sprintf(buff, "%s/%s_%d.weights",backup_directory,base, epoch);
save_weights(net, buff);
}
if(get_current_batch(net)%100 == 0){
char buff[256];
sprintf(buff, "%s/%s.backup",backup_directory,base);
save_weights(net, buff);
}
}
char buff[256];
sprintf(buff, "%s/%s.weights", backup_directory, base);
save_weights(net, buff);
free_network(net);
free_ptrs((void**)labels, classes);
free(base);
free_data(train);
}
void test_cifar_multi(char *filename, char *weightfile)
{
network net = parse_network_cfg(filename);
if(weightfile){
load_weights(&net, weightfile);
}
set_batch_network(&net, 1);
srand(time(0));
float avg_acc = 0;
data test = load_cifar10_data("data/cifar/cifar-10-batches-bin/test_batch.bin");
int i;
for(i = 0; i < test.X.rows; ++i){
image im = float_to_image(32, 32, 3, test.X.vals[i]);
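// Average predictions over the image and its horizontal flip before picking a class.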
float pred[10] = {0};
float *p = network_predict(net, im.data);
axpy_cpu(10, 1, p, 1, pred, 1);
flip_image(im);
p = network_predict(net, im.data);
axpy_cpu(10, 1, p, 1, pred, 1);
int index = max_index(pred, 10);
int class = max_index(test.y.vals[i], 10);
if(index == class) avg_acc += 1;
free_image(im);
printf("%4d: %.2f%%\n", i, 100.*avg_acc/(i+1));
}
}
void test_cifar(char *filename, char *weightfile)
{
network net = parse_network_cfg(filename);
if(weightfile){
load_weights(&net, weightfile);
}
srand(time(0));
clock_t time;
float avg_acc = 0;
float avg_top5 = 0;
data test = load_cifar10_data("data/cifar/cifar-10-batches-bin/test_batch.bin");
time=clock();
float *acc = network_accuracies(net, test, 2);
avg_acc += acc[0];
avg_top5 += acc[1];
printf("top1: %f, %lf seconds, %d images\n", avg_acc, sec(clock()-time), test.X.rows);
free_data(test);
}
void extract_cifar()
{
char *labels[] = {"airplane","automobile","bird","cat","deer","dog","frog","horse","ship","truck"};
int i;
data train = load_all_cifar10();
data test = load_cifar10_data("data/cifar/cifar-10-batches-bin/test_batch.bin");
for(i = 0; i < train.X.rows; ++i){
image im = float_to_image(32, 32, 3, train.X.vals[i]);
int class = max_index(train.y.vals[i], 10);
char buff[256];
sprintf(buff, "data/cifar/train/%d_%s",i,labels[class]);
save_image_png(im, buff);
}
for(i = 0; i < test.X.rows; ++i){
image im = float_to_image(32, 32, 3, test.X.vals[i]);
int class = max_index(test.y.vals[i], 10);
char buff[256];
sprintf(buff, "data/cifar/test/%d_%s",i,labels[class]);
save_image_png(im, buff);
}
}
void test_cifar_csv(char *filename, char *weightfile)
{
network net = parse_network_cfg(filename);
if(weightfile){
load_weights(&net, weightfile);
}
srand(time(0));
data test = load_cifar10_data("data/cifar/cifar-10-batches-bin/test_batch.bin");
matrix pred = network_predict_data(net, test);
int i;
for(i = 0; i < test.X.rows; ++i){
image im = float_to_image(32, 32, 3, test.X.vals[i]);
flip_image(im);
}
matrix pred2 = network_predict_data(net, test);
scale_matrix(pred, .5);
scale_matrix(pred2, .5);
matrix_add_matrix(pred2, pred);
matrix_to_csv(pred);
fprintf(stderr, "Accuracy: %f\n", matrix_topk_accuracy(test.y, pred, 1));
free_data(test);
}
void test_cifar_csvtrain(char *filename, char *weightfile)
{
network net = parse_network_cfg(filename);
if(weightfile){
load_weights(&net, weightfile);
}
srand(time(0));
data test = load_all_cifar10();
matrix pred = network_predict_data(net, test);
int i;
for(i = 0; i < test.X.rows; ++i){
image im = float_to_image(32, 32, 3, test.X.vals[i]);
flip_image(im);
}
matrix pred2 = network_predict_data(net, test);
scale_matrix(pred, .5);
scale_matrix(pred2, .5);
matrix_add_matrix(pred2, pred);
matrix_to_csv(pred);
fprintf(stderr, "Accuracy: %f\n", matrix_topk_accuracy(test.y, pred, 1));
free_data(test);
}
void eval_cifar_csv()
{
data test = load_cifar10_data("data/cifar/cifar-10-batches-bin/test_batch.bin");
matrix pred = csv_to_matrix("results/combined.csv");
fprintf(stderr, "%d %d\n", pred.rows, pred.cols);
fprintf(stderr, "Accuracy: %f\n", matrix_topk_accuracy(test.y, pred, 1));
free_data(test);
free_matrix(pred);
}
void run_cifar(int argc, char **argv)
{
if(argc < 4){
fprintf(stderr, "usage: %s %s [train/test/valid] [cfg] [weights (optional)]\n", argv[0], argv[1]);
return;
}
char *cfg = argv[3];
char *weights = (argc > 4) ? argv[4] : 0;
if(0==strcmp(argv[2], "train")) train_cifar(cfg, weights);
else if(0==strcmp(argv[2], "extract")) extract_cifar();
else if(0==strcmp(argv[2], "distill")) train_cifar_distill(cfg, weights);
else if(0==strcmp(argv[2], "test")) test_cifar(cfg, weights);
else if(0==strcmp(argv[2], "multi")) test_cifar_multi(cfg, weights);
else if(0==strcmp(argv[2], "csv")) test_cifar_csv(cfg, weights);
else if(0==strcmp(argv[2], "csvtrain")) test_cifar_csvtrain(cfg, weights);
else if(0==strcmp(argv[2], "eval")) eval_cifar_csv();
}
# Stupid Python path hack.
# Instead, just add darknet.py somewhere in your Python path.
# OK, actually that might not be a great idea either; this is a work in progress.
# Use at your own risk. Or don't, I don't care.
import sys, os
sys.path.append(os.path.join(os.getcwd(),'python/'))
import darknet as dn
net = dn.load_net("cfg/tiny-yolo.cfg", "tiny-yolo.weights", 0)
meta = dn.load_meta("cfg/coco.data")
r = dn.detect(net, meta, "data/dog.jpg")
print(r)
# And then down here you could detect a lot more images like:
r = dn.detect(net, meta, "data/eagle.jpg")
print(r)
r = dn.detect(net, meta, "data/giraffe.jpg")
print(r)
r = dn.detect(net, meta, "data/horses.jpg")
print(r)
r = dn.detect(net, meta, "data/person.jpg")
print(r)
#include "darknet.h"
char *dice_labels[] = {"face1","face2","face3","face4","face5","face6"};
void train_dice(char *cfgfile, char *weightfile)
{
srand(time(0));
float avg_loss = -1;
char *base = basecfg(cfgfile);
char *backup_directory = "/home/pjreddie/backup/";
printf("%s\n", base);
network net = parse_network_cfg(cfgfile);
if(weightfile){
load_weights(&net, weightfile);
}
printf("Learning Rate: %g, Momentum: %g, Decay: %g\n", net.learning_rate, net.momentum, net.decay);
int imgs = 1024;
int i = *net.seen/imgs;
char **labels = dice_labels;
list *plist = get_paths("data/dice/dice.train.list");
char **paths = (char **)list_to_array(plist);
printf("%d\n", plist->size);
clock_t time;
while(1){
++i;
time=clock();
data train = load_data_old(paths, imgs, plist->size, labels, 6, net.w, net.h);
printf("Loaded: %lf seconds\n", sec(clock()-time));
time=clock();
float loss = train_network(net, train);
if(avg_loss == -1) avg_loss = loss;
avg_loss = avg_loss*.9 + loss*.1;
printf("%d: %f, %f avg, %lf seconds, %ld images\n", i, loss, avg_loss, sec(clock()-time), *net.seen);
free_data(train);
if((i % 100) == 0) net.learning_rate *= .1;
if(i%100==0){
char buff[256];
sprintf(buff, "%s/%s_%d.weights",backup_directory,base, i);
save_weights(net, buff);
}
}
}
void validate_dice(char *filename, char *weightfile)
{
network net = parse_network_cfg(filename);
if(weightfile){
load_weights(&net, weightfile);
}
srand(time(0));
char **labels = dice_labels;
list *plist = get_paths("data/dice/dice.val.list");
char **paths = (char **)list_to_array(plist);
int m = plist->size;
free_list(plist);
data val = load_data_old(paths, m, 0, labels, 6, net.w, net.h);
float *acc = network_accuracies(net, val, 2);
printf("Validation Accuracy: %f, %d images\n", acc[0], m);
free_data(val);
}
void test_dice(char *cfgfile, char *weightfile, char *filename)
{
network net = parse_network_cfg(cfgfile);
if(weightfile){
load_weights(&net, weightfile);
}
set_batch_network(&net, 1);
srand(2222222);
int i = 0;
char **names = dice_labels;
char buff[256];
char *input = buff;
int indexes[6];
while(1){
if(filename){
strncpy(input, filename, 256);
}else{
printf("Enter Image Path: ");
fflush(stdout);
input = fgets(input, 256, stdin);
if(!input) return;
strtok(input, "\n");
}
image im = load_image_color(input, net.w, net.h);
float *X = im.data;
float *predictions = network_predict(net, X);
top_predictions(net, 6, indexes);
for(i = 0; i < 6; ++i){
int index = indexes[i];
printf("%s: %f\n", names[index], predictions[index]);
}
free_image(im);
if (filename) break;
}
}
void run_dice(int argc, char **argv)
{
if(argc < 4){
fprintf(stderr, "usage: %s %s [train/test/valid] [cfg] [weights (optional)]\n", argv[0], argv[1]);
return;
}
char *cfg = argv[3];
char *weights = (argc > 4) ? argv[4] : 0;
char *filename = (argc > 5) ? argv[5]: 0;
if(0==strcmp(argv[2], "test")) test_dice(cfg, weights, filename);
else if(0==strcmp(argv[2], "train")) train_dice(cfg, weights);
else if(0==strcmp(argv[2], "valid")) validate_dice(cfg, weights);
}
#include "darknet.h"
#include <sys/time.h>
#include <assert.h>
void train_regressor(char *datacfg, char *cfgfile, char *weightfile, int *gpus, int ngpus, int clear)
{
int i;
float avg_loss = -1;
char *base = basecfg(cfgfile);
printf("%s\n", base);
printf("%d\n", ngpus);
network *nets = calloc(ngpus, sizeof(network));
srand(time(0));
int seed = rand();
for(i = 0; i < ngpus; ++i){
srand(seed);
#ifdef GPU
cuda_set_device(gpus[i]);
#endif
nets[i] = parse_network_cfg(cfgfile);
if(weightfile){
load_weights(&nets[i], weightfile);
}
if(clear) *nets[i].seen = 0;
nets[i].learning_rate *= ngpus;
}
srand(time(0));
network net = nets[0];
int imgs = net.batch * net.subdivisions * ngpus;
printf("Learning Rate: %g, Momentum: %g, Decay: %g\n", net.learning_rate, net.momentum, net.decay);
list *options = read_data_cfg(datacfg);
char *backup_directory = option_find_str(options, "backup", "/backup/");
char *train_list = option_find_str(options, "train", "data/train.list");
list *plist = get_paths(train_list);
char **paths = (char **)list_to_array(plist);
printf("%d\n", plist->size);
int N = plist->size;
clock_t time;
load_args args = {0};
args.w = net.w;
args.h = net.h;
args.threads = 32;
args.min = net.min_crop;
args.max = net.max_crop;
args.angle = net.angle;
args.aspect = net.aspect;
args.exposure = net.exposure;
args.saturation = net.saturation;
args.hue = net.hue;
args.size = net.w;
args.paths = paths;
args.n = imgs;
args.m = N;
args.type = REGRESSION_DATA;
data train;
data buffer;
pthread_t load_thread;
args.d = &buffer;
load_thread = load_data(args);
int epoch = (*net.seen)/N;
while(get_current_batch(net) < net.max_batches || net.max_batches == 0){
time=clock();
pthread_join(load_thread, 0);
train = buffer;
load_thread = load_data(args);
printf("Loaded: %lf seconds\n", sec(clock()-time));
time=clock();
float loss = 0;
#ifdef GPU
if(ngpus == 1){
loss = train_network(net, train);
} else {
loss = train_networks(nets, ngpus, train, 4);
}
#else
loss = train_network(net, train);
#endif
if(avg_loss == -1) avg_loss = loss;
avg_loss = avg_loss*.9 + loss*.1;
printf("%ld, %.3f: %f, %f avg, %f rate, %lf seconds, %ld images\n", get_current_batch(net), (float)(*net.seen)/N, loss, avg_loss, get_current_rate(net), sec(clock()-time), *net.seen);
free_data(train);
if(*net.seen/N > epoch){
epoch = *net.seen/N;
char buff[256];
sprintf(buff, "%s/%s_%d.weights",backup_directory,base, epoch);
save_weights(net, buff);
}
if(get_current_batch(net)%100 == 0){
char buff[256];
sprintf(buff, "%s/%s.backup",backup_directory,base);
save_weights(net, buff);
}
}
char buff[256];
sprintf(buff, "%s/%s.weights", backup_directory, base);
save_weights(net, buff);
free_network(net);
free_ptrs((void**)paths, plist->size);
free_list(plist);
free(base);
}
void predict_regressor(char *cfgfile, char *weightfile, char *filename)
{
network net = parse_network_cfg(cfgfile);
if(weightfile){
load_weights(&net, weightfile);
}
set_batch_network(&net, 1);
srand(2222222);
clock_t time;
char buff[256];
char *input = buff;
while(1){
if(filename){
strncpy(input, filename, 256);
}else{
printf("Enter Image Path: ");
fflush(stdout);
input = fgets(input, 256, stdin);
if(!input) return;
strtok(input, "\n");
}
image im = load_image_color(input, 0, 0);
image sized = letterbox_image(im, net.w, net.h);
float *X = sized.data;
time=clock();
float *predictions = network_predict(net, X);
printf("Predicted: %f\n", predictions[0]);
printf("%s: Predicted in %f seconds.\n", input, sec(clock()-time));
free_image(im);
free_image(sized);
if (filename) break;
}
}
void demo_regressor(char *datacfg, char *cfgfile, char *weightfile, int cam_index, const char *filename)
{
#ifdef OPENCV
printf("Regressor Demo\n");
network net = parse_network_cfg(cfgfile);
if(weightfile){
load_weights(&net, weightfile);
}
set_batch_network(&net, 1);
srand(2222222);
CvCapture * cap;
if(filename){
cap = cvCaptureFromFile(filename);
}else{
cap = cvCaptureFromCAM(cam_index);
}
if(!cap) error("Couldn't connect to webcam.\n");
cvNamedWindow("Regressor", CV_WINDOW_NORMAL);
cvResizeWindow("Regressor", 512, 512);
float fps = 0;
while(1){
struct timeval tval_before, tval_after, tval_result;
gettimeofday(&tval_before, NULL);
image in = get_image_from_stream(cap);
image in_s = letterbox_image(in, net.w, net.h);
show_image(in, "Regressor");
float *predictions = network_predict(net, in_s.data);
printf("\033[2J");
printf("\033[1;1H");
printf("\nFPS:%.0f\n",fps);
printf("People: %f\n", predictions[0]);
free_image(in_s);
free_image(in);
cvWaitKey(10);
gettimeofday(&tval_after, NULL);
timersub(&tval_after, &tval_before, &tval_result);
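// Instantaneous frames-per-second from this frame's wall-clock time, exponentially smoothed below.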
float curr = 1000000.f/((long int)tval_result.tv_usec);
fps = .9*fps + .1*curr;
}
#endif
}
void run_regressor(int argc, char **argv)
{
if(argc < 4){
fprintf(stderr, "usage: %s %s [train/test/valid] [cfg] [weights (optional)]\n", argv[0], argv[1]);
return;
}
char *gpu_list = find_char_arg(argc, argv, "-gpus", 0);
int *gpus = 0;
int gpu = 0;
int ngpus = 0;
if(gpu_list){
printf("%s\n", gpu_list);
int len = strlen(gpu_list);
ngpus = 1;
int i;
for(i = 0; i < len; ++i){
if (gpu_list[i] == ',') ++ngpus;
}
gpus = calloc(ngpus, sizeof(int));
for(i = 0; i < ngpus; ++i){
gpus[i] = atoi(gpu_list);
gpu_list = strchr(gpu_list, ',')+1;
}
} else {
gpu = gpu_index;
gpus = &gpu;
ngpus = 1;
}
int cam_index = find_int_arg(argc, argv, "-c", 0);
int clear = find_arg(argc, argv, "-clear");
char *data = argv[3];
char *cfg = argv[4];
char *weights = (argc > 5) ? argv[5] : 0;
char *filename = (argc > 6) ? argv[6]: 0;
if(0==strcmp(argv[2], "test")) predict_regressor(data, cfg, weights);
else if(0==strcmp(argv[2], "train")) train_regressor(data, cfg, weights, gpus, ngpus, clear);
else if(0==strcmp(argv[2], "demo")) demo_regressor(data, cfg, weights, cam_index, filename);
}
#include "darknet.h"
#ifdef OPENCV
image get_image_from_stream(CvCapture *cap);
image ipl_to_image(IplImage* src);
void reconstruct_picture(network net, float *features, image recon, image update, float rate, float momentum, float lambda, int smooth_size, int iters);
typedef struct {
float *x;
float *y;
} float_pair;
float_pair get_rnn_vid_data(network net, char **files, int n, int batch, int steps)
{
int b;
assert(net.batch == steps + 1);
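// The extractor runs steps+1 consecutive frames per clip; y below is x shifted by
// one frame, so the RNN is trained to predict the next frame's features.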
image out_im = get_network_image(net);
int output_size = out_im.w*out_im.h*out_im.c;
printf("%d %d %d\n", out_im.w, out_im.h, out_im.c);
float *feats = calloc(net.batch*batch*output_size, sizeof(float));
for(b = 0; b < batch; ++b){
int input_size = net.w*net.h*net.c;
float *input = calloc(input_size*net.batch, sizeof(float));
char *filename = files[rand()%n];
CvCapture *cap = cvCaptureFromFile(filename);
int frames = cvGetCaptureProperty(cap, CV_CAP_PROP_FRAME_COUNT);
// Check the clip length before sampling a start frame (avoids a modulo on a non-positive range).
if (frames < (steps + 4)){
--b;
free(input);
cvReleaseCapture(&cap);
continue;
}
int index = rand() % (frames - steps - 2);
printf("frames: %d, index: %d\n", frames, index);
cvSetCaptureProperty(cap, CV_CAP_PROP_POS_FRAMES, index);
int i;
for(i = 0; i < net.batch; ++i){
IplImage* src = cvQueryFrame(cap);
image im = ipl_to_image(src);
rgbgr_image(im);
image re = resize_image(im, net.w, net.h);
//show_image(re, "loaded");
//cvWaitKey(10);
memcpy(input + i*input_size, re.data, input_size*sizeof(float));
free_image(im);
free_image(re);
}
float *output = network_predict(net, input);
free(input);
for(i = 0; i < net.batch; ++i){
memcpy(feats + (b + i*batch)*output_size, output + i*output_size, output_size*sizeof(float));
}
cvReleaseCapture(&cap);
}
//printf("%d %d %d\n", out_im.w, out_im.h, out_im.c);
float_pair p = {0};
p.x = feats;
p.y = feats + output_size*batch; //+ out_im.w*out_im.h*out_im.c;
return p;
}
void train_vid_rnn(char *cfgfile, char *weightfile)
{
char *train_videos = "data/vid/train.txt";
char *backup_directory = "/home/pjreddie/backup/";
srand(time(0));
char *base = basecfg(cfgfile);
printf("%s\n", base);
float avg_loss = -1;
network net = parse_network_cfg(cfgfile);
if(weightfile){
load_weights(&net, weightfile);
}
printf("Learning Rate: %g, Momentum: %g, Decay: %g\n", net.learning_rate, net.momentum, net.decay);
int imgs = net.batch*net.subdivisions;
int i = *net.seen/imgs;
list *plist = get_paths(train_videos);
int N = plist->size;
char **paths = (char **)list_to_array(plist);
clock_t time;
int steps = net.time_steps;
int batch = net.batch / net.time_steps;
network extractor = parse_network_cfg("cfg/extractor.cfg");
load_weights(&extractor, "/home/pjreddie/trained/yolo-coco.conv");
while(get_current_batch(net) < net.max_batches){
i += 1;
time=clock();
float_pair p = get_rnn_vid_data(extractor, paths, N, batch, steps);
copy_cpu(net.inputs*net.batch, p.x, 1, net.input, 1);
copy_cpu(net.truths*net.batch, p.y, 1, net.truth, 1);
float loss = train_network_datum(net) / (net.batch);
free(p.x);
if (avg_loss < 0) avg_loss = loss;
avg_loss = avg_loss*.9 + loss*.1;
fprintf(stderr, "%d: %f, %f avg, %f rate, %lf seconds\n", i, loss, avg_loss, get_current_rate(net), sec(clock()-time));
if(i%100==0){
char buff[256];
sprintf(buff, "%s/%s_%d.weights", backup_directory, base, i);
save_weights(net, buff);
}
if(i%10==0){
char buff[256];
sprintf(buff, "%s/%s.backup", backup_directory, base);
save_weights(net, buff);
}
}
char buff[256];
sprintf(buff, "%s/%s_final.weights", backup_directory, base);
save_weights(net, buff);
}
image save_reconstruction(network net, image *init, float *feat, char *name, int i)
{
image recon;
if (init) {
recon = copy_image(*init);
} else {
recon = make_random_image(net.w, net.h, 3);
}
image update = make_image(net.w, net.h, 3);
reconstruct_picture(net, feat, recon, update, .01, .9, .1, 2, 50);
char buff[256];
sprintf(buff, "%s%d", name, i);
save_image(recon, buff);
free_image(update);
return recon;
}
void generate_vid_rnn(char *cfgfile, char *weightfile)
{
network extractor = parse_network_cfg("cfg/extractor.recon.cfg");
load_weights(&extractor, "/home/pjreddie/trained/yolo-coco.conv");
network net = parse_network_cfg(cfgfile);
if(weightfile){
load_weights(&net, weightfile);
}
set_batch_network(&extractor, 1);
set_batch_network(&net, 1);
int i;
CvCapture *cap = cvCaptureFromFile("/extra/vid/ILSVRC2015/Data/VID/snippets/val/ILSVRC2015_val_00007030.mp4");
float *feat;
float *next;
image last;
for(i = 0; i < 25; ++i){
image im = get_image_from_stream(cap);
image re = resize_image(im, extractor.w, extractor.h);
feat = network_predict(extractor, re.data);
if(i > 0){
printf("%f %f\n", mean_array(feat, 14*14*512), variance_array(feat, 14*14*512));
printf("%f %f\n", mean_array(next, 14*14*512), variance_array(next, 14*14*512));
printf("%f\n", mse_array(feat, 14*14*512));
axpy_cpu(14*14*512, -1, feat, 1, next, 1);
printf("%f\n", mse_array(next, 14*14*512));
}
next = network_predict(net, feat);
free_image(im);
free_image(save_reconstruction(extractor, 0, feat, "feat", i));
free_image(save_reconstruction(extractor, 0, next, "next", i));
if (i==24) last = copy_image(re);
free_image(re);
}
for(i = 0; i < 30; ++i){
next = network_predict(net, next);
image new = save_reconstruction(extractor, &last, next, "new", i);
free_image(last);
last = new;
}
}
void run_vid_rnn(int argc, char **argv)
{
if(argc < 4){
fprintf(stderr, "usage: %s %s [train/test/valid] [cfg] [weights (optional)]\n", argv[0], argv[1]);
return;
}
char *cfg = argv[3];
char *weights = (argc > 4) ? argv[4] : 0;
//char *filename = (argc > 5) ? argv[5]: 0;
if(0==strcmp(argv[2], "train")) train_vid_rnn(cfg, weights);
else if(0==strcmp(argv[2], "generate")) generate_vid_rnn(cfg, weights);
}
#else
void run_vid_rnn(int argc, char **argv){}
#endif
#include "darknet.h"
#include <sys/time.h>
#include <assert.h>
void train_segmenter(char *datacfg, char *cfgfile, char *weightfile, int *gpus, int ngpus, int clear, int display)
{
int i;
float avg_loss = -1;
char *base = basecfg(cfgfile);
printf("%s\n", base);
printf("%d\n", ngpus);
network *nets = calloc(ngpus, sizeof(network));
srand(time(0));
int seed = rand();
for(i = 0; i < ngpus; ++i){
srand(seed);
#ifdef GPU
cuda_set_device(gpus[i]);
#endif
nets[i] = parse_network_cfg(cfgfile);
if(weightfile){
load_weights(&nets[i], weightfile);
}
if(clear) *nets[i].seen = 0;
}
srand(time(0));
network net = nets[0];
image pred = get_network_image(net);
int div = net.w/pred.w;
assert(pred.w * div == net.w);
assert(pred.h * div == net.h);
int imgs = net.batch * net.subdivisions * ngpus;
printf("Learning Rate: %g, Momentum: %g, Decay: %g\n", net.learning_rate, net.momentum, net.decay);
list *options = read_data_cfg(datacfg);
char *backup_directory = option_find_str(options, "backup", "/backup/");
char *train_list = option_find_str(options, "train", "data/train.list");
list *plist = get_paths(train_list);
char **paths = (char **)list_to_array(plist);
printf("%d\n", plist->size);
int N = plist->size;
clock_t time;
load_args args = {0};
args.w = net.w;
args.h = net.h;
args.threads = 32;
args.scale = div;
args.min = net.min_crop;
args.max = net.max_crop;
args.angle = net.angle;
args.aspect = net.aspect;
args.exposure = net.exposure;
args.saturation = net.saturation;
args.hue = net.hue;
args.size = net.w;
args.classes = 80;
args.paths = paths;
args.n = imgs;
args.m = N;
args.type = SEGMENTATION_DATA;
data train;
data buffer;
pthread_t load_thread;
args.d = &buffer;
load_thread = load_data(args);
int epoch = (*net.seen)/N;
while(get_current_batch(net) < net.max_batches || net.max_batches == 0){
time=clock();
pthread_join(load_thread, 0);
train = buffer;
load_thread = load_data(args);
printf("Loaded: %lf seconds\n", sec(clock()-time));
time=clock();
float loss = 0;
#ifdef GPU
if(ngpus == 1){
loss = train_network(net, train);
} else {
loss = train_networks(nets, ngpus, train, 4);
}
#else
loss = train_network(net, train);
#endif
if(display){
image tr = float_to_image(net.w/div, net.h/div, 80, train.y.vals[net.batch*(net.subdivisions-1)]);
image im = float_to_image(net.w, net.h, net.c, train.X.vals[net.batch*(net.subdivisions-1)]);
image mask = mask_to_rgb(tr);
image prmask = mask_to_rgb(pred);
show_image(im, "input");
show_image(prmask, "pred");
show_image(mask, "truth");
#ifdef OPENCV
cvWaitKey(100);
#endif
free_image(mask);
free_image(prmask);
}
if(avg_loss == -1) avg_loss = loss;
avg_loss = avg_loss*.9 + loss*.1;
printf("%ld, %.3f: %f, %f avg, %f rate, %lf seconds, %ld images\n", get_current_batch(net), (float)(*net.seen)/N, loss, avg_loss, get_current_rate(net), sec(clock()-time), *net.seen);
free_data(train);
if(*net.seen/N > epoch){
epoch = *net.seen/N;
char buff[256];
sprintf(buff, "%s/%s_%d.weights",backup_directory,base, epoch);
save_weights(net, buff);
}
if(get_current_batch(net)%100 == 0){
char buff[256];
sprintf(buff, "%s/%s.backup",backup_directory,base);
save_weights(net, buff);
}
}
char buff[256];
sprintf(buff, "%s/%s.weights", backup_directory, base);
save_weights(net, buff);
free_network(net);
free_ptrs((void**)paths, plist->size);
free_list(plist);
free(base);
}
void predict_segmenter(char *datafile, char *cfgfile, char *weightfile, char *filename)
{
network net = parse_network_cfg(cfgfile);
if(weightfile){
load_weights(&net, weightfile);
}
set_batch_network(&net, 1);
srand(2222222);
clock_t time;
char buff[256];
char *input = buff;
while(1){
if(filename){
strncpy(input, filename, 256);
}else{
printf("Enter Image Path: ");
fflush(stdout);
input = fgets(input, 256, stdin);
if(!input) return;
strtok(input, "\n");
}
image im = load_image_color(input, 0, 0);
image sized = letterbox_image(im, net.w, net.h);
float *X = sized.data;
time=clock();
float *predictions = network_predict(net, X);
image pred = get_network_image(net);
image prmask = mask_to_rgb(pred);
show_image(sized, "orig");
show_image(prmask, "pred");
#ifdef OPENCV
cvWaitKey(0);
#endif
printf("Predicted: %f\n", predictions[0]);
printf("%s: Predicted in %f seconds.\n", input, sec(clock()-time));
free_image(im);
free_image(sized);
free_image(prmask);
if (filename) break;
}
}
void demo_segmenter(char *datacfg, char *cfgfile, char *weightfile, int cam_index, const char *filename)
{
#ifdef OPENCV
printf("Classifier Demo\n");
network net = parse_network_cfg(cfgfile);
if(weightfile){
load_weights(&net, weightfile);
}
set_batch_network(&net, 1);
srand(2222222);
CvCapture * cap;
if(filename){
cap = cvCaptureFromFile(filename);
}else{
cap = cvCaptureFromCAM(cam_index);
}
if(!cap) error("Couldn't connect to webcam.\n");
cvNamedWindow("Segmenter", CV_WINDOW_NORMAL);
cvResizeWindow("Segmenter", 512, 512);
float fps = 0;
while(1){
struct timeval tval_before, tval_after, tval_result;
gettimeofday(&tval_before, NULL);
image in = get_image_from_stream(cap);
image in_s = letterbox_image(in, net.w, net.h);
float *predictions = network_predict(net, in_s.data);
printf("\033[2J");
printf("\033[1;1H");
printf("\nFPS:%.0f\n",fps);
image pred = get_network_image(net);
image prmask = mask_to_rgb(pred);
show_image(prmask, "Segmenter");
free_image(in_s);
free_image(in);
free_image(prmask);
cvWaitKey(10);
gettimeofday(&tval_after, NULL);
timersub(&tval_after, &tval_before, &tval_result);
float curr = 1000000.f/((long int)tval_result.tv_usec);
fps = .9*fps + .1*curr;
}
#endif
}
void run_segmenter(int argc, char **argv)
{
if(argc < 4){
fprintf(stderr, "usage: %s %s [train/test/valid] [cfg] [weights (optional)]\n", argv[0], argv[1]);
return;
}
char *gpu_list = find_char_arg(argc, argv, "-gpus", 0);
int *gpus = 0;
int gpu = 0;
int ngpus = 0;
if(gpu_list){
printf("%s\n", gpu_list);
int len = strlen(gpu_list);
ngpus = 1;
int i;
for(i = 0; i < len; ++i){
if (gpu_list[i] == ',') ++ngpus;
}
gpus = calloc(ngpus, sizeof(int));
for(i = 0; i < ngpus; ++i){
gpus[i] = atoi(gpu_list);
gpu_list = strchr(gpu_list, ',')+1;
}
} else {
gpu = gpu_index;
gpus = &gpu;
ngpus = 1;
}
int cam_index = find_int_arg(argc, argv, "-c", 0);
int clear = find_arg(argc, argv, "-clear");
int display = find_arg(argc, argv, "-display");
char *data = argv[3];
char *cfg = argv[4];
char *weights = (argc > 5) ? argv[5] : 0;
char *filename = (argc > 6) ? argv[6]: 0;
if(0==strcmp(argv[2], "test")) predict_segmenter(data, cfg, weights, filename);
else if(0==strcmp(argv[2], "train")) train_segmenter(data, cfg, weights, gpus, ngpus, clear, display);
else if(0==strcmp(argv[2], "demo")) demo_segmenter(data, cfg, weights, cam_index, filename);
}
#include "darknet.h"
void train_super(char *cfgfile, char *weightfile, int clear)
{
char *train_images = "/data/imagenet/imagenet1k.train.list";
char *backup_directory = "/home/pjreddie/backup/";
srand(time(0));
char *base = basecfg(cfgfile);
printf("%s\n", base);
float avg_loss = -1;
network net = parse_network_cfg(cfgfile);
if(weightfile){
load_weights(&net, weightfile);
}
if(clear) *net.seen = 0;
printf("Learning Rate: %g, Momentum: %g, Decay: %g\n", net.learning_rate, net.momentum, net.decay);
int imgs = net.batch*net.subdivisions;
int i = *net.seen/imgs;
data train, buffer;
list *plist = get_paths(train_images);
//int N = plist->size;
char **paths = (char **)list_to_array(plist);
load_args args = {0};
args.w = net.w;
args.h = net.h;
args.scale = 4;
args.paths = paths;
args.n = imgs;
args.m = plist->size;
args.d = &buffer;
args.type = SUPER_DATA;
pthread_t load_thread = load_data_in_thread(args);
clock_t time;
//while(i*imgs < N*120){
while(get_current_batch(net) < net.max_batches){
i += 1;
time=clock();
pthread_join(load_thread, 0);
train = buffer;
load_thread = load_data_in_thread(args);
printf("Loaded: %lf seconds\n", sec(clock()-time));
time=clock();
float loss = train_network(net, train);
if (avg_loss < 0) avg_loss = loss;
avg_loss = avg_loss*.9 + loss*.1;
printf("%d: %f, %f avg, %f rate, %lf seconds, %d images\n", i, loss, avg_loss, get_current_rate(net), sec(clock()-time), i*imgs);
if(i%1000==0){
char buff[256];
sprintf(buff, "%s/%s_%d.weights", backup_directory, base, i);
save_weights(net, buff);
}
if(i%100==0){
char buff[256];
sprintf(buff, "%s/%s.backup", backup_directory, base);
save_weights(net, buff);
}
free_data(train);
}
char buff[256];
sprintf(buff, "%s/%s_final.weights", backup_directory, base);
save_weights(net, buff);
}
void test_super(char *cfgfile, char *weightfile, char *filename)
{
network net = parse_network_cfg(cfgfile);
if(weightfile){
load_weights(&net, weightfile);
}
set_batch_network(&net, 1);
srand(2222222);
clock_t time;
char buff[256];
char *input = buff;
while(1){
if(filename){
strncpy(input, filename, 256);
}else{
printf("Enter Image Path: ");
fflush(stdout);
input = fgets(input, 256, stdin);
if(!input) return;
strtok(input, "\n");
}
image im = load_image_color(input, 0, 0);
resize_network(&net, im.w, im.h);
printf("%d %d\n", im.w, im.h);
float *X = im.data;
time=clock();
network_predict(net, X);
image out = get_network_image(net);
printf("%s: Predicted in %f seconds.\n", input, sec(clock()-time));
save_image(out, "out");
free_image(im);
if (filename) break;
}
}
void run_super(int argc, char **argv)
{
if(argc < 4){
fprintf(stderr, "usage: %s %s [train/test/valid] [cfg] [weights (optional)]\n", argv[0], argv[1]);
return;
}
char *cfg = argv[3];
char *weights = (argc > 4) ? argv[4] : 0;
char *filename = (argc > 5) ? argv[5] : 0;
int clear = find_arg(argc, argv, "-clear");
if(0==strcmp(argv[2], "train")) train_super(cfg, weights, clear);
else if(0==strcmp(argv[2], "test")) test_super(cfg, weights, filename);
/*
else if(0==strcmp(argv[2], "valid")) validate_super(cfg, weights);
*/
}
#include "darknet.h"
#include <sys/time.h>
void train_swag(char *cfgfile, char *weightfile)
{
char *train_images = "data/voc.0712.trainval";
char *backup_directory = "/home/pjreddie/backup/";
srand(time(0));
char *base = basecfg(cfgfile);
printf("%s\n", base);
float avg_loss = -1;
network net = parse_network_cfg(cfgfile);
if(weightfile){
load_weights(&net, weightfile);
}
printf("Learning Rate: %g, Momentum: %g, Decay: %g\n", net.learning_rate, net.momentum, net.decay);
int imgs = net.batch*net.subdivisions;
int i = *net.seen/imgs;
data train, buffer;
layer l = net.layers[net.n - 1];
int side = l.side;
int classes = l.classes;
float jitter = l.jitter;
list *plist = get_paths(train_images);
//int N = plist->size;
char **paths = (char **)list_to_array(plist);
load_args args = {0};
args.w = net.w;
args.h = net.h;
args.paths = paths;
args.n = imgs;
args.m = plist->size;
args.classes = classes;
args.jitter = jitter;
args.num_boxes = side;
args.d = &buffer;
args.type = REGION_DATA;
pthread_t load_thread = load_data_in_thread(args);
clock_t time;
//while(i*imgs < N*120){
while(get_current_batch(net) < net.max_batches){
i += 1;
time=clock();
pthread_join(load_thread, 0);
train = buffer;
load_thread = load_data_in_thread(args);
printf("Loaded: %lf seconds\n", sec(clock()-time));
time=clock();
float loss = train_network(net, train);
if (avg_loss < 0) avg_loss = loss;
avg_loss = avg_loss*.9 + loss*.1;
printf("%d: %f, %f avg, %f rate, %lf seconds, %d images\n", i, loss, avg_loss, get_current_rate(net), sec(clock()-time), i*imgs);
if(i%1000==0 || i == 600){
char buff[256];
sprintf(buff, "%s/%s_%d.weights", backup_directory, base, i);
save_weights(net, buff);
}
free_data(train);
}
char buff[256];
sprintf(buff, "%s/%s_final.weights", backup_directory, base);
save_weights(net, buff);
}
void run_swag(int argc, char **argv)
{
if(argc < 4){
fprintf(stderr, "usage: %s %s [train/test/valid] [cfg] [weights (optional)]\n", argv[0], argv[1]);
return;
}
char *cfg = argv[3];
char *weights = (argc > 4) ? argv[4] : 0;
if(0==strcmp(argv[2], "train")) train_swag(cfg, weights);
}
#include "darknet.h"
void train_tag(char *cfgfile, char *weightfile, int clear)
{
srand(time(0));
float avg_loss = -1;
char *base = basecfg(cfgfile);
char *backup_directory = "/home/pjreddie/backup/";
printf("%s\n", base);
network net = parse_network_cfg(cfgfile);
if(weightfile){
load_weights(&net, weightfile);
}
if(clear) *net.seen = 0;
printf("Learning Rate: %g, Momentum: %g, Decay: %g\n", net.learning_rate, net.momentum, net.decay);
int imgs = 1024;
list *plist = get_paths("/home/pjreddie/tag/train.list");
char **paths = (char **)list_to_array(plist);
printf("%d\n", plist->size);
int N = plist->size;
clock_t time;
pthread_t load_thread;
data train;
data buffer;
load_args args = {0};
args.w = net.w;
args.h = net.h;
args.min = net.w;
args.max = net.max_crop;
args.size = net.w;
args.paths = paths;
args.classes = net.outputs;
args.n = imgs;
args.m = N;
args.d = &buffer;
args.type = TAG_DATA;
args.angle = net.angle;
args.exposure = net.exposure;
args.saturation = net.saturation;
args.hue = net.hue;
fprintf(stderr, "%d classes\n", net.outputs);
load_thread = load_data_in_thread(args);
int epoch = (*net.seen)/N;
while(get_current_batch(net) < net.max_batches || net.max_batches == 0){
time=clock();
pthread_join(load_thread, 0);
train = buffer;
load_thread = load_data_in_thread(args);
printf("Loaded: %lf seconds\n", sec(clock()-time));
time=clock();
float loss = train_network(net, train);
if(avg_loss == -1) avg_loss = loss;
avg_loss = avg_loss*.9 + loss*.1;
printf("%ld, %.3f: %f, %f avg, %f rate, %lf seconds, %ld images\n", get_current_batch(net), (float)(*net.seen)/N, loss, avg_loss, get_current_rate(net), sec(clock()-time), *net.seen);
free_data(train);
if(*net.seen/N > epoch){
epoch = *net.seen/N;
char buff[256];
sprintf(buff, "%s/%s_%d.weights",backup_directory,base, epoch);
save_weights(net, buff);
}
if(get_current_batch(net)%100 == 0){
char buff[256];
sprintf(buff, "%s/%s.backup",backup_directory,base);
save_weights(net, buff);
}
}
char buff[256];
sprintf(buff, "%s/%s.weights", backup_directory, base);
save_weights(net, buff);
pthread_join(load_thread, 0);
free_data(buffer);
free_network(net);
free_ptrs((void**)paths, plist->size);
free_list(plist);
free(base);
}
void test_tag(char *cfgfile, char *weightfile, char *filename)
{
network net = parse_network_cfg(cfgfile);
if(weightfile){
load_weights(&net, weightfile);
}
set_batch_network(&net, 1);
srand(2222222);
int i = 0;
char **names = get_labels("data/tags.txt");
clock_t time;
int indexes[10];
char buff[256];
char *input = buff;
int size = net.w;
while(1){
if(filename){
strncpy(input, filename, 256);
}else{
printf("Enter Image Path: ");
fflush(stdout);
input = fgets(input, 256, stdin);
if(!input) return;
strtok(input, "\n");
}
image im = load_image_color(input, 0, 0);
image r = resize_min(im, size);
resize_network(&net, r.w, r.h);
printf("%d %d\n", r.w, r.h);
float *X = r.data;
time=clock();
float *predictions = network_predict(net, X);
top_predictions(net, 10, indexes);
printf("%s: Predicted in %f seconds.\n", input, sec(clock()-time));
for(i = 0; i < 10; ++i){
int index = indexes[i];
printf("%.1f%%: %s\n", predictions[index]*100, names[index]);
}
if(r.data != im.data) free_image(r);
free_image(im);
if (filename) break;
}
}
void run_tag(int argc, char **argv)
{
if(argc < 4){
fprintf(stderr, "usage: %s %s [train/test/valid] [cfg] [weights (optional)]\n", argv[0], argv[1]);
return;
}
int clear = find_arg(argc, argv, "-clear");
char *cfg = argv[3];
char *weights = (argc > 4) ? argv[4] : 0;
char *filename = (argc > 5) ? argv[5] : 0;
if(0==strcmp(argv[2], "train")) train_tag(cfg, weights, clear);
else if(0==strcmp(argv[2], "test")) test_tag(cfg, weights, filename);
}
#include "darknet.h"
void extract_voxel(char *lfile, char *rfile, char *prefix)
{
#ifdef OPENCV
int w = 1920;
int h = 1080;
int shift = 0;
int count = 0;
CvCapture *lcap = cvCaptureFromFile(lfile);
CvCapture *rcap = cvCaptureFromFile(rfile);
while(1){
image l = get_image_from_stream(lcap);
image r = get_image_from_stream(rcap);
if(!l.w || !r.w) break;
if(count%100 == 0) {
shift = best_3d_shift_r(l, r, -l.h/100, l.h/100);
printf("%d\n", shift);
}
image ls = crop_image(l, (l.w - w)/2, (l.h - h)/2, w, h);
image rs = crop_image(r, 105 + (r.w - w)/2, (r.h - h)/2 + shift, w, h);
char buff[256];
sprintf(buff, "%s_%05d_l", prefix, count);
save_image(ls, buff);
sprintf(buff, "%s_%05d_r", prefix, count);
save_image(rs, buff);
free_image(l);
free_image(r);
free_image(ls);
free_image(rs);
++count;
}
#else
printf("need OpenCV for extraction\n");
#endif
}
void train_voxel(char *cfgfile, char *weightfile)
{
char *train_images = "/data/imagenet/imagenet1k.train.list";
char *backup_directory = "/home/pjreddie/backup/";
srand(time(0));
char *base = basecfg(cfgfile);
printf("%s\n", base);
float avg_loss = -1;
network net = parse_network_cfg(cfgfile);
if(weightfile){
load_weights(&net, weightfile);
}
printf("Learning Rate: %g, Momentum: %g, Decay: %g\n", net.learning_rate, net.momentum, net.decay);
int imgs = net.batch*net.subdivisions;
int i = *net.seen/imgs;
data train, buffer;
list *plist = get_paths(train_images);
//int N = plist->size;
char **paths = (char **)list_to_array(plist);
load_args args = {0};
args.w = net.w;
args.h = net.h;
args.scale = 4;
args.paths = paths;
args.n = imgs;
args.m = plist->size;
args.d = &buffer;
args.type = SUPER_DATA;
pthread_t load_thread = load_data_in_thread(args);
clock_t time;
//while(i*imgs < N*120){
while(get_current_batch(net) < net.max_batches){
i += 1;
time=clock();
pthread_join(load_thread, 0);
train = buffer;
load_thread = load_data_in_thread(args);
printf("Loaded: %lf seconds\n", sec(clock()-time));
time=clock();
float loss = train_network(net, train);
if (avg_loss < 0) avg_loss = loss;
avg_loss = avg_loss*.9 + loss*.1;
printf("%d: %f, %f avg, %f rate, %lf seconds, %d images\n", i, loss, avg_loss, get_current_rate(net), sec(clock()-time), i*imgs);
if(i%1000==0){
char buff[256];
sprintf(buff, "%s/%s_%d.weights", backup_directory, base, i);
save_weights(net, buff);
}
if(i%100==0){
char buff[256];
sprintf(buff, "%s/%s.backup", backup_directory, base);
save_weights(net, buff);
}
free_data(train);
}
char buff[256];
sprintf(buff, "%s/%s_final.weights", backup_directory, base);
save_weights(net, buff);
}
void test_voxel(char *cfgfile, char *weightfile, char *filename)
{
network net = parse_network_cfg(cfgfile);
if(weightfile){
load_weights(&net, weightfile);
}
set_batch_network(&net, 1);
srand(2222222);
clock_t time;
char buff[256];
char *input = buff;
while(1){
if(filename){
strncpy(input, filename, 256);
}else{
printf("Enter Image Path: ");
fflush(stdout);
input = fgets(input, 256, stdin);
if(!input) return;
strtok(input, "\n");
}
image im = load_image_color(input, 0, 0);
resize_network(&net, im.w, im.h);
printf("%d %d\n", im.w, im.h);
float *X = im.data;
time=clock();
network_predict(net, X);
image out = get_network_image(net);
printf("%s: Predicted in %f seconds.\n", input, sec(clock()-time));
save_image(out, "out");
free_image(im);
if (filename) break;
}
}
void run_voxel(int argc, char **argv)
{
if(argc < 4){
fprintf(stderr, "usage: %s %s [train/test/valid] [cfg] [weights (optional)]\n", argv[0], argv[1]);
return;
}
char *cfg = argv[3];
char *weights = (argc > 4) ? argv[4] : 0;
char *filename = (argc > 5) ? argv[5] : 0;
if(0==strcmp(argv[2], "train")) train_voxel(cfg, weights);
else if(0==strcmp(argv[2], "test")) test_voxel(cfg, weights, filename);
else if(0==strcmp(argv[2], "extract")) extract_voxel(argv[3], argv[4], argv[5]);
/*
else if(0==strcmp(argv[2], "valid")) validate_voxel(cfg, weights);
*/
}
#include "darknet.h"
void train_writing(char *cfgfile, char *weightfile)
{
char *backup_directory = "/home/pjreddie/backup/";
srand(time(0));
float avg_loss = -1;
char *base = basecfg(cfgfile);
printf("%s\n", base);
network net = parse_network_cfg(cfgfile);
if(weightfile){
load_weights(&net, weightfile);
}
printf("Learning Rate: %g, Momentum: %g, Decay: %g\n", net.learning_rate, net.momentum, net.decay);
int imgs = net.batch*net.subdivisions;
list *plist = get_paths("figures.list");
char **paths = (char **)list_to_array(plist);
clock_t time;
int N = plist->size;
printf("N: %d\n", N);
image out = get_network_image(net);
data train, buffer;
load_args args = {0};
args.w = net.w;
args.h = net.h;
args.out_w = out.w;
args.out_h = out.h;
args.paths = paths;
args.n = imgs;
args.m = N;
args.d = &buffer;
args.type = WRITING_DATA;
pthread_t load_thread = load_data_in_thread(args);
int epoch = (*net.seen)/N;
while(get_current_batch(net) < net.max_batches || net.max_batches == 0){
time=clock();
pthread_join(load_thread, 0);
train = buffer;
load_thread = load_data_in_thread(args);
printf("Loaded %lf seconds\n",sec(clock()-time));
time=clock();
float loss = train_network(net, train);
/*
image pred = float_to_image(64, 64, 1, out);
print_image(pred);
*/
/*
image im = float_to_image(256, 256, 3, train.X.vals[0]);
image lab = float_to_image(64, 64, 1, train.y.vals[0]);
image pred = float_to_image(64, 64, 1, out);
show_image(im, "image");
show_image(lab, "label");
print_image(lab);
show_image(pred, "pred");
cvWaitKey(0);
*/
if(avg_loss == -1) avg_loss = loss;
avg_loss = avg_loss*.9 + loss*.1;
printf("%ld, %.3f: %f, %f avg, %f rate, %lf seconds, %ld images\n", get_current_batch(net), (float)(*net.seen)/N, loss, avg_loss, get_current_rate(net), sec(clock()-time), *net.seen);
free_data(train);
if(get_current_batch(net)%100 == 0){
char buff[256];
sprintf(buff, "%s/%s_batch_%ld.weights", backup_directory, base, get_current_batch(net));
save_weights(net, buff);
}
if(*net.seen/N > epoch){
epoch = *net.seen/N;
char buff[256];
sprintf(buff, "%s/%s_%d.weights",backup_directory,base, epoch);
save_weights(net, buff);
}
}
}
void test_writing(char *cfgfile, char *weightfile, char *filename)
{
network net = parse_network_cfg(cfgfile);
if(weightfile){
load_weights(&net, weightfile);
}
set_batch_network(&net, 1);
srand(2222222);
clock_t time;
char buff[256];
char *input = buff;
while(1){
if(filename){
strncpy(input, filename, 256);
input[255] = 0; /* strncpy does not guarantee termination at the limit */
}else{
printf("Enter Image Path: ");
fflush(stdout);
input = fgets(input, 256, stdin);
if(!input) return;
strtok(input, "\n");
}
image im = load_image_color(input, 0, 0);
resize_network(&net, im.w, im.h);
printf("%d %d %d\n", im.h, im.w, im.c);
float *X = im.data;
time=clock();
network_predict(net, X);
printf("%s: Predicted in %f seconds.\n", input, sec(clock()-time));
image pred = get_network_image(net);
image upsampled = resize_image(pred, im.w, im.h);
image thresh = threshold_image(upsampled, .5);
pred = thresh;
show_image(pred, "prediction");
show_image(im, "orig");
#ifdef OPENCV
cvWaitKey(0);
cvDestroyAllWindows();
#endif
free_image(upsampled);
free_image(thresh);
free_image(im);
if (filename) break;
}
}
void run_writing(int argc, char **argv)
{
if(argc < 4){
fprintf(stderr, "usage: %s %s [train/test/valid] [cfg] [weights (optional)]\n", argv[0], argv[1]);
return;
}
char *cfg = argv[3];
char *weights = (argc > 4) ? argv[4] : 0;
char *filename = (argc > 5) ? argv[5] : 0;
if(0==strcmp(argv[2], "train")) train_writing(cfg, weights);
else if(0==strcmp(argv[2], "test")) test_writing(cfg, weights, filename);
}
from ctypes import *
import math
import random
def sample(probs):
    s = sum(probs)
    probs = [a/s for a in probs]
    r = random.uniform(0, 1)
    for i in range(len(probs)):
        r = r - probs[i]
        if r <= 0:
            return i
    return len(probs)-1
def c_array(ctype, values):
    return (ctype * len(values))(*values)
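# A small standalone sketch (not part of the original wrapper) showing how the
# two helpers above fit together: sample() draws an index with probability
# proportional to its weight, and c_array() packs a Python list into the
# ctypes array the darknet C API expects.
def _demo_helpers():
    counts = [0, 0, 0]
    for _ in range(10000):
        counts[sample([.1, .2, .7])] += 1
    print counts  # roughly [1000, 2000, 7000]
    arr = c_array(c_float, [1.0, 2.0, 3.0])
    print arr[0], arr[2]  # 1.0 3.0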
class BOX(Structure):
    _fields_ = [("x", c_float),
                ("y", c_float),
                ("w", c_float),
                ("h", c_float)]

class IMAGE(Structure):
    _fields_ = [("w", c_int),
                ("h", c_int),
                ("c", c_int),
                ("data", POINTER(c_float))]

class METADATA(Structure):
    _fields_ = [("classes", c_int),
                ("names", POINTER(c_char_p))]
#lib = CDLL("/home/pjreddie/documents/darknet/libdarknet.so", RTLD_GLOBAL)
lib = CDLL("libdarknet.so", RTLD_GLOBAL)
lib.network_width.argtypes = [c_void_p]
lib.network_width.restype = c_int
lib.network_height.argtypes = [c_void_p]
lib.network_height.restype = c_int
predict = lib.network_predict_p
predict.argtypes = [c_void_p, POINTER(c_float)]
predict.restype = POINTER(c_float)
make_boxes = lib.make_boxes
make_boxes.argtypes = [c_void_p]
make_boxes.restype = POINTER(BOX)
free_ptrs = lib.free_ptrs
free_ptrs.argtypes = [POINTER(c_void_p), c_int]
num_boxes = lib.num_boxes
num_boxes.argtypes = [c_void_p]
num_boxes.restype = c_int
make_probs = lib.make_probs
make_probs.argtypes = [c_void_p]
make_probs.restype = POINTER(POINTER(c_float))
# NOTE: do not rebind network_predict_p here: "detect" and "predict" would then
# share one ctypes function object, and setting detect.argtypes would clobber
# predict's. The C detection entry point is bound as network_detect below.
reset_rnn = lib.reset_rnn
reset_rnn.argtypes = [c_void_p]
load_net = lib.load_network_p
load_net.argtypes = [c_char_p, c_char_p, c_int]
load_net.restype = c_void_p
free_image = lib.free_image
free_image.argtypes = [IMAGE]
letterbox_image = lib.letterbox_image
letterbox_image.argtypes = [IMAGE, c_int, c_int]
letterbox_image.restype = IMAGE
load_meta = lib.get_metadata
lib.get_metadata.argtypes = [c_char_p]
lib.get_metadata.restype = METADATA
load_image = lib.load_image_color
load_image.argtypes = [c_char_p, c_int, c_int]
load_image.restype = IMAGE
predict_image = lib.network_predict_image
predict_image.argtypes = [c_void_p, IMAGE]
predict_image.restype = POINTER(c_float)
network_detect = lib.network_detect
network_detect.argtypes = [c_void_p, IMAGE, c_float, c_float, c_float, POINTER(BOX), POINTER(POINTER(c_float))]
def classify(net, meta, im):
    out = predict_image(net, im)
    res = []
    for i in range(meta.classes):
        res.append((meta.names[i], out[i]))
    res = sorted(res, key=lambda x: -x[1])
    return res
def detect(net, meta, image, thresh=.5, hier_thresh=.5, nms=.45):
    im = load_image(image, 0, 0)
    boxes = make_boxes(net)
    probs = make_probs(net)
    num = num_boxes(net)
    network_detect(net, im, thresh, hier_thresh, nms, boxes, probs)
    res = []
    for j in range(num):
        for i in range(meta.classes):
            if probs[j][i] > 0:
                res.append((meta.names[i], probs[j][i], (boxes[j].x, boxes[j].y, boxes[j].w, boxes[j].h)))
    res = sorted(res, key=lambda x: -x[1])
    free_image(im)
    free_ptrs(cast(probs, POINTER(c_void_p)), num)
    return res
if __name__ == "__main__":
    #net = load_net("cfg/densenet201.cfg", "/home/pjreddie/trained/densenet201.weights", 0)
    #im = load_image("data/wolf.jpg", 0, 0)
    #meta = load_meta("cfg/imagenet1k.data")
    #r = classify(net, meta, im)
    #print r[:10]
    net = load_net("cfg/tiny-yolo.cfg", "tiny-yolo.weights", 0)
    meta = load_meta("cfg/coco.data")
    r = detect(net, meta, "data/dog.jpg")
    print r
from darknet import *
def predict_tactic(net, s):
    prob = 0
    d = c_array(c_float, [0.0]*256)
    tac = ''
    if not len(s):
        s = '\n'
    # prime the RNN with every character of the goal except the last,
    # presented as one-hot vectors over the 256 byte values
    for c in s[:-1]:
        d[ord(c)] = 1
        pred = predict(net, d)
        d[ord(c)] = 0
    c = s[-1]
    # then sample characters from the predicted distribution, accumulating the
    # log-probability, until the generated tactic ends in a '.'
    while 1:
        d[ord(c)] = 1
        pred = predict(net, d)
        d[ord(c)] = 0
        pred = [pred[i] for i in range(256)]
        ind = sample(pred)
        c = chr(ind)
        prob += math.log(pred[ind])
        if len(tac) and tac[-1] == '.':
            break
        tac = tac + c
    return (tac, prob)
def predict_tactics(net, s, n):
    tacs = []
    for i in range(n):
        reset_rnn(net)
        tacs.append(predict_tactic(net, s))
    tacs = sorted(tacs, key=lambda x: -x[1])
    return tacs
net = load_net("cfg/coq.test.cfg", "/home/pjreddie/backup/coq.backup", 0)
t = predict_tactics(net, "+++++\n", 10)
print t
mkdir -p images
mkdir -p images/orig
mkdir -p images/train
mkdir -p images/val
ffmpeg -i Face1.mp4 images/orig/face1_%6d.jpg
ffmpeg -i Face2.mp4 images/orig/face2_%6d.jpg
ffmpeg -i Face3.mp4 images/orig/face3_%6d.jpg
ffmpeg -i Face4.mp4 images/orig/face4_%6d.jpg
ffmpeg -i Face5.mp4 images/orig/face5_%6d.jpg
ffmpeg -i Face6.mp4 images/orig/face6_%6d.jpg
mogrify -resize 100x100^ -gravity center -crop 100x100+0+0 +repage images/orig/*
ls images/orig/* | shuf | head -n 1000 | xargs mv -t images/val
mv images/orig/* images/train
find `pwd`/images/train -name \*.jpg > dice.train.list
find `pwd`/images/val -name \*.jpg > dice.val.list
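# In short: extract frames from the six videos, center-crop everything to
# 100x100, hold out 1000 random frames in images/val, keep the rest in
# images/train, and write absolute-path list files for each split.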
#!/bin/bash
# Usage:
# wget http://pjreddie.com/media/files/peek.weights
# scripts/gen_tactic.sh < data/goal.txt
./darknet rnn generatetactic cfg/gru.cfg peek.weights 2>/dev/null
#!/bin/bash
# Clone COCO API
git clone https://github.com/pdollar/coco
cd coco
mkdir images
cd images
# Download Images
wget -c https://pjreddie.com/media/files/train2014.zip
wget -c https://pjreddie.com/media/files/val2014.zip
# Unzip
unzip -q train2014.zip
unzip -q val2014.zip
cd ..
# Download COCO Metadata
wget -c https://pjreddie.com/media/files/instances_train-val2014.zip
wget -c https://pjreddie.com/media/files/coco/5k.part
wget -c https://pjreddie.com/media/files/coco/trainvalno5k.part
wget -c https://pjreddie.com/media/files/coco/labels.tgz
tar xzf labels.tgz
unzip -q instances_train-val2014.zip
# Set Up Image Lists
paste <(awk "{print \"$PWD\"}" <5k.part) 5k.part | tr -d '\t' > 5k.txt
paste <(awk "{print \"$PWD\"}" <trainvalno5k.part) trainvalno5k.part | tr -d '\t' > trainvalno5k.txt
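# The two paste/awk lines above prepend $PWD to every relative path in the
# .part files, yielding absolute-path image lists (5k.txt for validation,
# trainvalno5k.txt for training).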
#!/bin/bash
mkdir -p labelled
wd=`pwd`
for f in val/*.xml;
do
label=`grep -m1 "<name>" $f | grep -oP '<name>\K[^<]*'`
im=`echo $f | sed 's/val/imgs/; s/xml/JPEG/'`
out=`echo $im | sed 's/JPEG/'${label}'.JPEG/; s/imgs/labelled/'`
ln -s ${wd}/$im ${wd}/$out
done
find ${wd}/labelled -name \*.JPEG > inet.val.list
import xml.etree.ElementTree as ET
import pickle
import os
from os import listdir, getcwd
from os.path import join
sets=[('2012', 'train'), ('2012', 'val'), ('2007', 'train'), ('2007', 'val'), ('2007', 'test')]
classes = ["aeroplane", "bicycle", "bird", "boat", "bottle", "bus", "car", "cat", "chair", "cow", "diningtable", "dog", "horse", "motorbike", "person", "pottedplant", "sheep", "sofa", "train", "tvmonitor"]
def convert(size, box):
    dw = 1./(size[0])
    dh = 1./(size[1])
    x = (box[0] + box[1])/2.0 - 1
    y = (box[2] + box[3])/2.0 - 1
    w = box[1] - box[0]
    h = box[3] - box[2]
    x = x*dw
    w = w*dw
    y = y*dh
    h = h*dh
    return (x,y,w,h)
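# Quick sanity sketch (assumed numbers, not in the original script): convert()
# maps a VOC corner box (xmin, xmax, ymin, ymax) on a 640x480 image to YOLO's
# normalized center format.
def _convert_example():
    x, y, w, h = convert((640, 480), (100, 300, 200, 400))
    print x, y, w, h  # ~0.311, ~0.623, 0.3125, ~0.417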
def convert_annotation(year, image_id):
    in_file = open('VOCdevkit/VOC%s/Annotations/%s.xml'%(year, image_id))
    out_file = open('VOCdevkit/VOC%s/labels/%s.txt'%(year, image_id), 'w')
    tree=ET.parse(in_file)
    root = tree.getroot()
    size = root.find('size')
    w = int(size.find('width').text)
    h = int(size.find('height').text)
    for obj in root.iter('object'):
        difficult = obj.find('difficult').text
        cls = obj.find('name').text
        if cls not in classes or int(difficult)==1:
            continue
        cls_id = classes.index(cls)
        xmlbox = obj.find('bndbox')
        b = (float(xmlbox.find('xmin').text), float(xmlbox.find('xmax').text), float(xmlbox.find('ymin').text), float(xmlbox.find('ymax').text))
        bb = convert((w,h), b)
        out_file.write(str(cls_id) + " " + " ".join([str(a) for a in bb]) + '\n')
wd = getcwd()
for year, image_set in sets:
    if not os.path.exists('VOCdevkit/VOC%s/labels/'%(year)):
        os.makedirs('VOCdevkit/VOC%s/labels/'%(year))
    image_ids = open('VOCdevkit/VOC%s/ImageSets/Main/%s.txt'%(year, image_set)).read().strip().split()
    list_file = open('%s_%s.txt'%(year, image_set), 'w')
    for image_id in image_ids:
        list_file.write('%s/VOCdevkit/VOC%s/JPEGImages/%s.jpg\n'%(wd, year, image_id))
        convert_annotation(year, image_id)
    list_file.close()
os.system("cat 2007_train.txt 2007_val.txt 2012_train.txt 2012_val.txt > train.txt")
os.system("cat 2007_train.txt 2007_val.txt 2007_test.txt 2012_train.txt 2012_val.txt > train.all.txt")
#include "cuda_runtime.h"
#include "curand.h"
#include "cublas_v2.h"
extern "C" {
#include "activations.h"
#include "cuda.h"
}
__device__ float lhtan_activate_kernel(float x)
{
if(x < 0) return .001*x;
if(x > 1) return .001*(x-1) + 1;
return x;
}
__device__ float lhtan_gradient_kernel(float x)
{
if(x > 0 && x < 1) return 1;
return .001;
}
__device__ float hardtan_activate_kernel(float x)
{
if (x < -1) return -1;
if (x > 1) return 1;
return x;
}
__device__ float linear_activate_kernel(float x){return x;}
__device__ float logistic_activate_kernel(float x){return 1./(1. + exp(-x));}
__device__ float loggy_activate_kernel(float x){return 2./(1. + exp(-x)) - 1;}
__device__ float relu_activate_kernel(float x){return x*(x>0);}
__device__ float elu_activate_kernel(float x){return (x >= 0)*x + (x < 0)*(exp(x)-1);}
__device__ float relie_activate_kernel(float x){return (x>0) ? x : .01*x;}
__device__ float ramp_activate_kernel(float x){return x*(x>0)+.1*x;}
__device__ float leaky_activate_kernel(float x){return (x>0) ? x : .1*x;}
__device__ float tanh_activate_kernel(float x){return (2/(1 + exp(-2*x)) - 1);}
__device__ float plse_activate_kernel(float x)
{
if(x < -4) return .01 * (x + 4);
if(x > 4) return .01 * (x - 4) + 1;
return .125*x + .5;
}
__device__ float stair_activate_kernel(float x)
{
int n = floor(x);
if (n%2 == 0) return floor(x/2.);
else return (x - n) + floor(x/2.);
}
__device__ float hardtan_gradient_kernel(float x)
{
if (x > -1 && x < 1) return 1;
return 0;
}
__device__ float linear_gradient_kernel(float x){return 1;}
__device__ float logistic_gradient_kernel(float x){return (1-x)*x;}
__device__ float loggy_gradient_kernel(float x)
{
float y = (x+1.)/2.;
return 2*(1-y)*y;
}
__device__ float relu_gradient_kernel(float x){return (x>0);}
__device__ float elu_gradient_kernel(float x){return (x >= 0) + (x < 0)*(x + 1);}
__device__ float relie_gradient_kernel(float x){return (x>0) ? 1 : .01;}
__device__ float ramp_gradient_kernel(float x){return (x>0)+.1;}
__device__ float leaky_gradient_kernel(float x){return (x>0) ? 1 : .1;}
__device__ float tanh_gradient_kernel(float x){return 1-x*x;}
__device__ float plse_gradient_kernel(float x){return (x < 0 || x > 1) ? .01 : .125;}
__device__ float stair_gradient_kernel(float x)
{
if (floor(x) == x) return 0;
return 1;
}
__device__ float activate_kernel(float x, ACTIVATION a)
{
switch(a){
case LINEAR:
return linear_activate_kernel(x);
case LOGISTIC:
return logistic_activate_kernel(x);
case LOGGY:
return loggy_activate_kernel(x);
case RELU:
return relu_activate_kernel(x);
case ELU:
return elu_activate_kernel(x);
case RELIE:
return relie_activate_kernel(x);
case RAMP:
return ramp_activate_kernel(x);
case LEAKY:
return leaky_activate_kernel(x);
case TANH:
return tanh_activate_kernel(x);
case PLSE:
return plse_activate_kernel(x);
case STAIR:
return stair_activate_kernel(x);
case HARDTAN:
return hardtan_activate_kernel(x);
case LHTAN:
return lhtan_activate_kernel(x);
}
return 0;
}
__device__ float gradient_kernel(float x, ACTIVATION a)
{
switch(a){
case LINEAR:
return linear_gradient_kernel(x);
case LOGISTIC:
return logistic_gradient_kernel(x);
case LOGGY:
return loggy_gradient_kernel(x);
case RELU:
return relu_gradient_kernel(x);
case ELU:
return elu_gradient_kernel(x);
case RELIE:
return relie_gradient_kernel(x);
case RAMP:
return ramp_gradient_kernel(x);
case LEAKY:
return leaky_gradient_kernel(x);
case TANH:
return tanh_gradient_kernel(x);
case PLSE:
return plse_gradient_kernel(x);
case STAIR:
return stair_gradient_kernel(x);
case HARDTAN:
return hardtan_gradient_kernel(x);
case LHTAN:
return lhtan_gradient_kernel(x);
}
return 0;
}
__global__ void activate_array_kernel(float *x, int n, ACTIVATION a)
{
int i = (blockIdx.x + blockIdx.y*gridDim.x) * blockDim.x + threadIdx.x;
if(i < n) x[i] = activate_kernel(x[i], a);
}
__global__ void gradient_array_kernel(float *x, int n, ACTIVATION a, float *delta)
{
int i = (blockIdx.x + blockIdx.y*gridDim.x) * blockDim.x + threadIdx.x;
if(i < n) delta[i] *= gradient_kernel(x[i], a);
}
extern "C" void activate_array_gpu(float *x, int n, ACTIVATION a)
{
activate_array_kernel<<<cuda_gridsize(n), BLOCK>>>(x, n, a);
check_error(cudaPeekAtLastError());
}
extern "C" void gradient_array_gpu(float *x, int n, ACTIVATION a, float *delta)
{
gradient_array_kernel<<<cuda_gridsize(n), BLOCK>>>(x, n, a, delta);
check_error(cudaPeekAtLastError());
}
#include "activation_layer.h"
#include "utils.h"
#include "cuda.h"
#include "blas.h"
#include "gemm.h"
#include <math.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
layer make_activation_layer(int batch, int inputs, ACTIVATION activation)
{
layer l = {0};
l.type = ACTIVE;
l.inputs = inputs;
l.outputs = inputs;
l.batch=batch;
l.output = calloc(batch*inputs, sizeof(float)); /* float, not float*: these are value buffers */
l.delta = calloc(batch*inputs, sizeof(float));
l.forward = forward_activation_layer;
l.backward = backward_activation_layer;
#ifdef GPU
l.forward_gpu = forward_activation_layer_gpu;
l.backward_gpu = backward_activation_layer_gpu;
l.output_gpu = cuda_make_array(l.output, inputs*batch);
l.delta_gpu = cuda_make_array(l.delta, inputs*batch);
#endif
l.activation = activation;
fprintf(stderr, "Activation Layer: %d inputs\n", inputs);
return l;
}
void forward_activation_layer(layer l, network net)
{
copy_cpu(l.outputs*l.batch, net.input, 1, l.output, 1);
activate_array(l.output, l.outputs*l.batch, l.activation);
}
void backward_activation_layer(layer l, network net)
{
gradient_array(l.output, l.outputs*l.batch, l.activation, l.delta);
copy_cpu(l.outputs*l.batch, l.delta, 1, net.delta, 1);
}
#ifdef GPU
void forward_activation_layer_gpu(layer l, network net)
{
copy_gpu(l.outputs*l.batch, net.input_gpu, 1, l.output_gpu, 1);
activate_array_gpu(l.output_gpu, l.outputs*l.batch, l.activation);
}
void backward_activation_layer_gpu(layer l, network net)
{
gradient_array_gpu(l.output_gpu, l.outputs*l.batch, l.activation, l.delta_gpu);
copy_gpu(l.outputs*l.batch, l.delta_gpu, 1, net.delta_gpu, 1);
}
#endif
#ifndef ACTIVATION_LAYER_H
#define ACTIVATION_LAYER_H
#include "activations.h"
#include "layer.h"
#include "network.h"
layer make_activation_layer(int batch, int inputs, ACTIVATION activation);
void forward_activation_layer(layer l, network net);
void backward_activation_layer(layer l, network net);
#ifdef GPU
void forward_activation_layer_gpu(layer l, network net);
void backward_activation_layer_gpu(layer l, network net);
#endif
#endif
#include "activations.h"
#include <math.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
char *get_activation_string(ACTIVATION a)
{
switch(a){
case LOGISTIC:
return "logistic";
case LOGGY:
return "loggy";
case RELU:
return "relu";
case ELU:
return "elu";
case RELIE:
return "relie";
case RAMP:
return "ramp";
case LINEAR:
return "linear";
case TANH:
return "tanh";
case PLSE:
return "plse";
case LEAKY:
return "leaky";
case STAIR:
return "stair";
case HARDTAN:
return "hardtan";
case LHTAN:
return "lhtan";
default:
break;
}
return "relu";
}
ACTIVATION get_activation(char *s)
{
if (strcmp(s, "logistic")==0) return LOGISTIC;
if (strcmp(s, "loggy")==0) return LOGGY;
if (strcmp(s, "relu")==0) return RELU;
if (strcmp(s, "elu")==0) return ELU;
if (strcmp(s, "relie")==0) return RELIE;
if (strcmp(s, "plse")==0) return PLSE;
if (strcmp(s, "hardtan")==0) return HARDTAN;
if (strcmp(s, "lhtan")==0) return LHTAN;
if (strcmp(s, "linear")==0) return LINEAR;
if (strcmp(s, "ramp")==0) return RAMP;
if (strcmp(s, "leaky")==0) return LEAKY;
if (strcmp(s, "tanh")==0) return TANH;
if (strcmp(s, "stair")==0) return STAIR;
fprintf(stderr, "Couldn't find activation function %s, going with ReLU\n", s);
return RELU;
}
float activate(float x, ACTIVATION a)
{
switch(a){
case LINEAR:
return linear_activate(x);
case LOGISTIC:
return logistic_activate(x);
case LOGGY:
return loggy_activate(x);
case RELU:
return relu_activate(x);
case ELU:
return elu_activate(x);
case RELIE:
return relie_activate(x);
case RAMP:
return ramp_activate(x);
case LEAKY:
return leaky_activate(x);
case TANH:
return tanh_activate(x);
case PLSE:
return plse_activate(x);
case STAIR:
return stair_activate(x);
case HARDTAN:
return hardtan_activate(x);
case LHTAN:
return lhtan_activate(x);
}
return 0;
}
void activate_array(float *x, const int n, const ACTIVATION a)
{
int i;
for(i = 0; i < n; ++i){
x[i] = activate(x[i], a);
}
}
float gradient(float x, ACTIVATION a)
{
switch(a){
case LINEAR:
return linear_gradient(x);
case LOGISTIC:
return logistic_gradient(x);
case LOGGY:
return loggy_gradient(x);
case RELU:
return relu_gradient(x);
case ELU:
return elu_gradient(x);
case RELIE:
return relie_gradient(x);
case RAMP:
return ramp_gradient(x);
case LEAKY:
return leaky_gradient(x);
case TANH:
return tanh_gradient(x);
case PLSE:
return plse_gradient(x);
case STAIR:
return stair_gradient(x);
case HARDTAN:
return hardtan_gradient(x);
case LHTAN:
return lhtan_gradient(x);
}
return 0;
}
void gradient_array(const float *x, const int n, const ACTIVATION a, float *delta)
{
int i;
for(i = 0; i < n; ++i){
delta[i] *= gradient(x[i], a);
}
}
#ifndef ACTIVATIONS_H
#define ACTIVATIONS_H
#include "darknet.h"
#include "cuda.h"
#include "math.h"
ACTIVATION get_activation(char *s);
char *get_activation_string(ACTIVATION a);
float activate(float x, ACTIVATION a);
float gradient(float x, ACTIVATION a);
void gradient_array(const float *x, const int n, const ACTIVATION a, float *delta);
void activate_array(float *x, const int n, const ACTIVATION a);
#ifdef GPU
void activate_array_gpu(float *x, int n, ACTIVATION a);
void gradient_array_gpu(float *x, int n, ACTIVATION a, float *delta);
#endif
static inline float stair_activate(float x)
{
int n = floor(x);
if (n%2 == 0) return floor(x/2.);
else return (x - n) + floor(x/2.);
}
static inline float hardtan_activate(float x)
{
if (x < -1) return -1;
if (x > 1) return 1;
return x;
}
static inline float linear_activate(float x){return x;}
static inline float logistic_activate(float x){return 1./(1. + exp(-x));}
static inline float loggy_activate(float x){return 2./(1. + exp(-x)) - 1;}
static inline float relu_activate(float x){return x*(x>0);}
static inline float elu_activate(float x){return (x >= 0)*x + (x < 0)*(exp(x)-1);}
static inline float relie_activate(float x){return (x>0) ? x : .01*x;}
static inline float ramp_activate(float x){return x*(x>0)+.1*x;}
static inline float leaky_activate(float x){return (x>0) ? x : .1*x;}
static inline float tanh_activate(float x){return (exp(2*x)-1)/(exp(2*x)+1);}
static inline float plse_activate(float x)
{
if(x < -4) return .01 * (x + 4);
if(x > 4) return .01 * (x - 4) + 1;
return .125*x + .5;
}
static inline float lhtan_activate(float x)
{
if(x < 0) return .001*x;
if(x > 1) return .001*(x-1) + 1;
return x;
}
static inline float lhtan_gradient(float x)
{
if(x > 0 && x < 1) return 1;
return .001;
}
static inline float hardtan_gradient(float x)
{
if (x > -1 && x < 1) return 1;
return 0;
}
static inline float linear_gradient(float x){return 1;}
static inline float logistic_gradient(float x){return (1-x)*x;}
static inline float loggy_gradient(float x)
{
float y = (x+1.)/2.;
return 2*(1-y)*y;
}
static inline float stair_gradient(float x)
{
if (floor(x) == x) return 0;
return 1;
}
static inline float relu_gradient(float x){return (x>0);}
static inline float elu_gradient(float x){return (x >= 0) + (x < 0)*(x + 1);}
static inline float relie_gradient(float x){return (x>0) ? 1 : .01;}
static inline float ramp_gradient(float x){return (x>0)+.1;}
static inline float leaky_gradient(float x){return (x>0) ? 1 : .1;}
static inline float tanh_gradient(float x){return 1-x*x;}
static inline float plse_gradient(float x){return (x < 0 || x > 1) ? .01 : .125;}
#endif
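# A quick numerical check (standalone Python sketch, not part of these C
# sources) of the convention above: the *_gradient functions take the layer's
# OUTPUT y rather than the pre-activation input, e.g. logistic_gradient is
# (1-y)*y and tanh_gradient is 1-y*y.
import math

def check_logistic_gradient(x=0.3, eps=1e-5):
    f = lambda z: 1.0/(1.0 + math.exp(-z))
    y = f(x)
    analytic = (1 - y)*y                          # logistic_gradient applied to the output
    numeric = (f(x + eps) - f(x - eps))/(2*eps)   # centered difference in x
    print analytic, numeric                       # should agree to ~1e-10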
#include "avgpool_layer.h"
#include "cuda.h"
#include <stdio.h>
avgpool_layer make_avgpool_layer(int batch, int w, int h, int c)
{
fprintf(stderr, "avg %4d x%4d x%4d -> %4d\n", w, h, c, c);
avgpool_layer l = {0};
l.type = AVGPOOL;
l.batch = batch;
l.h = h;
l.w = w;
l.c = c;
l.out_w = 1;
l.out_h = 1;
l.out_c = c;
l.outputs = l.out_c;
l.inputs = h*w*c;
int output_size = l.outputs * batch;
l.output = calloc(output_size, sizeof(float));
l.delta = calloc(output_size, sizeof(float));
l.forward = forward_avgpool_layer;
l.backward = backward_avgpool_layer;
#ifdef GPU
l.forward_gpu = forward_avgpool_layer_gpu;
l.backward_gpu = backward_avgpool_layer_gpu;
l.output_gpu = cuda_make_array(l.output, output_size);
l.delta_gpu = cuda_make_array(l.delta, output_size);
#endif
return l;
}
void resize_avgpool_layer(avgpool_layer *l, int w, int h)
{
l->w = w;
l->h = h;
l->inputs = h*w*l->c;
}
void forward_avgpool_layer(const avgpool_layer l, network net)
{
int b,i,k;
for(b = 0; b < l.batch; ++b){
for(k = 0; k < l.c; ++k){
int out_index = k + b*l.c;
l.output[out_index] = 0;
for(i = 0; i < l.h*l.w; ++i){
int in_index = i + l.h*l.w*(k + b*l.c);
l.output[out_index] += net.input[in_index];
}
l.output[out_index] /= l.h*l.w;
}
}
}
void backward_avgpool_layer(const avgpool_layer l, network net)
{
int b,i,k;
for(b = 0; b < l.batch; ++b){
for(k = 0; k < l.c; ++k){
int out_index = k + b*l.c;
for(i = 0; i < l.h*l.w; ++i){
int in_index = i + l.h*l.w*(k + b*l.c);
net.delta[in_index] += l.delta[out_index] / (l.h*l.w);
}
}
}
}
#ifndef AVGPOOL_LAYER_H
#define AVGPOOL_LAYER_H
#include "image.h"
#include "cuda.h"
#include "layer.h"
#include "network.h"
typedef layer avgpool_layer;
image get_avgpool_image(avgpool_layer l);
avgpool_layer make_avgpool_layer(int batch, int w, int h, int c);
void resize_avgpool_layer(avgpool_layer *l, int w, int h);
void forward_avgpool_layer(const avgpool_layer l, network net);
void backward_avgpool_layer(const avgpool_layer l, network net);
#ifdef GPU
void forward_avgpool_layer_gpu(avgpool_layer l, network net);
void backward_avgpool_layer_gpu(avgpool_layer l, network net);
#endif
#endif
#include "cuda_runtime.h"
#include "curand.h"
#include "cublas_v2.h"
extern "C" {
#include "avgpool_layer.h"
#include "cuda.h"
}
__global__ void forward_avgpool_layer_kernel(int n, int w, int h, int c, float *input, float *output)
{
int id = (blockIdx.x + blockIdx.y*gridDim.x) * blockDim.x + threadIdx.x;
if(id >= n) return;
int k = id % c;
id /= c;
int b = id;
int i;
int out_index = (k + c*b);
output[out_index] = 0;
for(i = 0; i < w*h; ++i){
int in_index = i + h*w*(k + b*c);
output[out_index] += input[in_index];
}
output[out_index] /= w*h;
}
__global__ void backward_avgpool_layer_kernel(int n, int w, int h, int c, float *in_delta, float *out_delta)
{
int id = (blockIdx.x + blockIdx.y*gridDim.x) * blockDim.x + threadIdx.x;
if(id >= n) return;
int k = id % c;
id /= c;
int b = id;
int i;
int out_index = (k + c*b);
for(i = 0; i < w*h; ++i){
int in_index = i + h*w*(k + b*c);
in_delta[in_index] += out_delta[out_index] / (w*h);
}
}
extern "C" void forward_avgpool_layer_gpu(avgpool_layer layer, network net)
{
size_t n = layer.c*layer.batch;
forward_avgpool_layer_kernel<<<cuda_gridsize(n), BLOCK>>>(n, layer.w, layer.h, layer.c, net.input_gpu, layer.output_gpu);
check_error(cudaPeekAtLastError());
}
extern "C" void backward_avgpool_layer_gpu(avgpool_layer layer, network net)
{
size_t n = layer.c*layer.batch;
backward_avgpool_layer_kernel<<<cuda_gridsize(n), BLOCK>>>(n, layer.w, layer.h, layer.c, net.delta_gpu, layer.delta_gpu);
check_error(cudaPeekAtLastError());
}
#ifndef BATCHNORM_LAYER_H
#define BATCHNORM_LAYER_H
#include "image.h"
#include "layer.h"
#include "network.h"
layer make_batchnorm_layer(int batch, int w, int h, int c);
void forward_batchnorm_layer(layer l, network net);
void backward_batchnorm_layer(layer l, network net);
#ifdef GPU
void forward_batchnorm_layer_gpu(layer l, network net);
void backward_batchnorm_layer_gpu(layer l, network net);
void pull_batchnorm_layer(layer l);
void push_batchnorm_layer(layer l);
#endif
#endif
#include "blas.h"
#include <math.h>
#include <assert.h>
#include <float.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
void reorg_cpu(float *x, int w, int h, int c, int batch, int stride, int forward, float *out)
{
int b,i,j,k;
int out_c = c/(stride*stride);
for(b = 0; b < batch; ++b){
for(k = 0; k < c; ++k){
for(j = 0; j < h; ++j){
for(i = 0; i < w; ++i){
int in_index = i + w*(j + h*(k + c*b));
int c2 = k % out_c;
int offset = k / out_c;
int w2 = i*stride + offset % stride;
int h2 = j*stride + offset / stride;
int out_index = w2 + w*stride*(h2 + h*stride*(c2 + out_c*b));
if(forward) out[out_index] = x[in_index];
else out[in_index] = x[out_index];
}
}
}
}
}
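/* With forward=1, reorg_cpu above rearranges each batch's c x h x w tensor into
 * c/(stride*stride) channels of size (h*stride) x (w*stride); with forward=0 it
 * applies the exact inverse mapping. */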
void flatten(float *x, int size, int layers, int batch, int forward)
{
float *swap = calloc(size*layers*batch, sizeof(float));
int i,c,b;
for(b = 0; b < batch; ++b){
for(c = 0; c < layers; ++c){
for(i = 0; i < size; ++i){
int i1 = b*layers*size + c*size + i;
int i2 = b*layers*size + i*layers + c;
if (forward) swap[i2] = x[i1];
else swap[i1] = x[i2];
}
}
}
memcpy(x, swap, size*layers*batch*sizeof(float));
free(swap);
}
void weighted_sum_cpu(float *a, float *b, float *s, int n, float *c)
{
int i;
for(i = 0; i < n; ++i){
c[i] = s[i]*a[i] + (1-s[i])*(b ? b[i] : 0);
}
}
void weighted_delta_cpu(float *a, float *b, float *s, float *da, float *db, float *ds, int n, float *dc)
{
int i;
for(i = 0; i < n; ++i){
if(da) da[i] += dc[i] * s[i];
if(db) db[i] += dc[i] * (1-s[i]);
ds[i] += dc[i] * (a[i] - b[i]);
}
}
void shortcut_cpu(int batch, int w1, int h1, int c1, float *add, int w2, int h2, int c2, float *out)
{
int stride = w1/w2;
int sample = w2/w1;
assert(stride == h1/h2);
assert(sample == h2/h1);
if(stride < 1) stride = 1;
if(sample < 1) sample = 1;
int minw = (w1 < w2) ? w1 : w2;
int minh = (h1 < h2) ? h1 : h2;
int minc = (c1 < c2) ? c1 : c2;
int i,j,k,b;
for(b = 0; b < batch; ++b){
for(k = 0; k < minc; ++k){
for(j = 0; j < minh; ++j){
for(i = 0; i < minw; ++i){
int out_index = i*sample + w2*(j*sample + h2*(k + c2*b));
int add_index = i*stride + w1*(j*stride + h1*(k + c1*b));
out[out_index] += add[add_index];
}
}
}
}
}
void mean_cpu(float *x, int batch, int filters, int spatial, float *mean)
{
float scale = 1./(batch * spatial);
int i,j,k;
for(i = 0; i < filters; ++i){
mean[i] = 0;
for(j = 0; j < batch; ++j){
for(k = 0; k < spatial; ++k){
int index = j*filters*spatial + i*spatial + k;
mean[i] += x[index];
}
}
mean[i] *= scale;
}
}
void variance_cpu(float *x, float *mean, int batch, int filters, int spatial, float *variance)
{
float scale = 1./(batch * spatial - 1);
int i,j,k;
for(i = 0; i < filters; ++i){
variance[i] = 0;
for(j = 0; j < batch; ++j){
for(k = 0; k < spatial; ++k){
int index = j*filters*spatial + i*spatial + k;
variance[i] += pow((x[index] - mean[i]), 2);
}
}
variance[i] *= scale;
}
}
void normalize_cpu(float *x, float *mean, float *variance, int batch, int filters, int spatial)
{
int b, f, i;
for(b = 0; b < batch; ++b){
for(f = 0; f < filters; ++f){
for(i = 0; i < spatial; ++i){
int index = b*filters*spatial + f*spatial + i;
x[index] = (x[index] - mean[f])/(sqrt(variance[f]) + .000001f);
}
}
}
}
void const_cpu(int N, float ALPHA, float *X, int INCX)
{
int i;
for(i = 0; i < N; ++i) X[i*INCX] = ALPHA;
}
void mul_cpu(int N, float *X, int INCX, float *Y, int INCY)
{
int i;
for(i = 0; i < N; ++i) Y[i*INCY] *= X[i*INCX];
}
void pow_cpu(int N, float ALPHA, float *X, int INCX, float *Y, int INCY)
{
int i;
for(i = 0; i < N; ++i) Y[i*INCY] = pow(X[i*INCX], ALPHA);
}
void axpy_cpu(int N, float ALPHA, float *X, int INCX, float *Y, int INCY)
{
int i;
for(i = 0; i < N; ++i) Y[i*INCY] += ALPHA*X[i*INCX];
}
void scal_cpu(int N, float ALPHA, float *X, int INCX)
{
int i;
for(i = 0; i < N; ++i) X[i*INCX] *= ALPHA;
}
void fill_cpu(int N, float ALPHA, float *X, int INCX)
{
int i;
for(i = 0; i < N; ++i) X[i*INCX] = ALPHA;
}
void deinter_cpu(int NX, float *X, int NY, float *Y, int B, float *OUT)
{
int i, j;
int index = 0;
for(j = 0; j < B; ++j) {
for(i = 0; i < NX; ++i){
if(X) X[j*NX + i] += OUT[index];
++index;
}
for(i = 0; i < NY; ++i){
if(Y) Y[j*NY + i] += OUT[index];
++index;
}
}
}
void inter_cpu(int NX, float *X, int NY, float *Y, int B, float *OUT)
{
int i, j;
int index = 0;
for(j = 0; j < B; ++j) {
for(i = 0; i < NX; ++i){
OUT[index++] = X[j*NX + i];
}
for(i = 0; i < NY; ++i){
OUT[index++] = Y[j*NY + i];
}
}
}
void copy_cpu(int N, float *X, int INCX, float *Y, int INCY)
{
int i;
for(i = 0; i < N; ++i) Y[i*INCY] = X[i*INCX];
}
void mult_add_into_cpu(int N, float *X, float *Y, float *Z)
{
int i;
for(i = 0; i < N; ++i) Z[i] += X[i]*Y[i];
}
void smooth_l1_cpu(int n, float *pred, float *truth, float *delta, float *error)
{
int i;
for(i = 0; i < n; ++i){
float diff = truth[i] - pred[i];
float abs_val = fabs(diff);
if(abs_val < 1) {
error[i] = diff * diff;
delta[i] = diff;
}
else {
error[i] = 2*abs_val - 1;
delta[i] = (diff > 0) ? 1 : -1; /* sign fixed to match l1_cpu below: push pred toward truth */
}
}
}
void l1_cpu(int n, float *pred, float *truth, float *delta, float *error)
{
int i;
for(i = 0; i < n; ++i){
float diff = truth[i] - pred[i];
error[i] = fabs(diff);
delta[i] = diff > 0 ? 1 : -1;
}
}
void l2_cpu(int n, float *pred, float *truth, float *delta, float *error)
{
int i;
for(i = 0; i < n; ++i){
float diff = truth[i] - pred[i];
error[i] = diff * diff;
delta[i] = diff;
}
}
float dot_cpu(int N, float *X, int INCX, float *Y, int INCY)
{
int i;
float dot = 0;
for(i = 0; i < N; ++i) dot += X[i*INCX] * Y[i*INCY];
return dot;
}
void softmax(float *input, int n, float temp, int stride, float *output)
{
int i;
float sum = 0;
float largest = -FLT_MAX;
for(i = 0; i < n; ++i){
if(input[i*stride] > largest) largest = input[i*stride];
}
for(i = 0; i < n; ++i){
float e = exp(input[i*stride]/temp - largest/temp);
sum += e;
output[i*stride] = e;
}
for(i = 0; i < n; ++i){
output[i*stride] /= sum;
}
}
void softmax_cpu(float *input, int n, int batch, int batch_offset, int groups, int group_offset, int stride, float temp, float *output)
{
int g, b;
for(b = 0; b < batch; ++b){
for(g = 0; g < groups; ++g){
softmax(input + b*batch_offset + g*group_offset, n, temp, stride, output + b*batch_offset + g*group_offset);
}
}
}
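# A reference sketch in Python (assumed, not from the sources) of what softmax()
# above computes: exp((x - max)/temp), normalized. Subtracting the max before
# exp() prevents overflow without changing the result; temp < 1 sharpens the
# distribution and temp > 1 flattens it.
import math

def softmax_ref(xs, temp=1.0):
    m = max(xs)
    es = [math.exp((x - m)/temp) for x in xs]
    s = sum(es)
    return [e/s for e in es]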
#ifndef BLAS_H
#define BLAS_H
#include "darknet.h"
void flatten(float *x, int size, int layers, int batch, int forward);
void pm(int M, int N, float *A);
float *random_matrix(int rows, int cols);
void time_random_matrix(int TA, int TB, int m, int k, int n);
void reorg_cpu(float *x, int w, int h, int c, int batch, int stride, int forward, float *out);
void test_blas();
void inter_cpu(int NX, float *X, int NY, float *Y, int B, float *OUT);
void deinter_cpu(int NX, float *X, int NY, float *Y, int B, float *OUT);
void mult_add_into_cpu(int N, float *X, float *Y, float *Z);
void const_cpu(int N, float ALPHA, float *X, int INCX);
void constrain_gpu(int N, float ALPHA, float * X, int INCX);
void pow_cpu(int N, float ALPHA, float *X, int INCX, float *Y, int INCY);
void mul_cpu(int N, float *X, int INCX, float *Y, int INCY);
void fill_cpu(int N, float ALPHA, float * X, int INCX);
float dot_cpu(int N, float *X, int INCX, float *Y, int INCY);
int test_gpu_blas();
void shortcut_cpu(int batch, int w1, int h1, int c1, float *add, int w2, int h2, int c2, float *out);
void mean_cpu(float *x, int batch, int filters, int spatial, float *mean);
void variance_cpu(float *x, float *mean, int batch, int filters, int spatial, float *variance);
void scale_bias(float *output, float *scales, int batch, int n, int size);
void backward_scale_cpu(float *x_norm, float *delta, int batch, int n, int size, float *scale_updates);
void mean_delta_cpu(float *delta, float *variance, int batch, int filters, int spatial, float *mean_delta);
void variance_delta_cpu(float *x, float *delta, float *mean, float *variance, int batch, int filters, int spatial, float *variance_delta);
void normalize_delta_cpu(float *x, float *mean, float *variance, float *mean_delta, float *variance_delta, int batch, int filters, int spatial, float *delta);
void smooth_l1_cpu(int n, float *pred, float *truth, float *delta, float *error);
void l2_cpu(int n, float *pred, float *truth, float *delta, float *error);
void l1_cpu(int n, float *pred, float *truth, float *delta, float *error);
void weighted_sum_cpu(float *a, float *b, float *s, int num, float *c);
void weighted_delta_cpu(float *a, float *b, float *s, float *da, float *db, float *ds, int n, float *dc);
void softmax(float *input, int n, float temp, int stride, float *output);
void softmax_cpu(float *input, int n, int batch, int batch_offset, int groups, int group_offset, int stride, float temp, float *output);
#ifdef GPU
#include "cuda.h"
#include "tree.h"
void axpy_gpu(int N, float ALPHA, float * X, int INCX, float * Y, int INCY);
void axpy_gpu_offset(int N, float ALPHA, float * X, int OFFX, int INCX, float * Y, int OFFY, int INCY);
void copy_gpu(int N, float * X, int INCX, float * Y, int INCY);
void copy_gpu_offset(int N, float * X, int OFFX, int INCX, float * Y, int OFFY, int INCY);
void add_gpu(int N, float ALPHA, float * X, int INCX);
void supp_gpu(int N, float ALPHA, float * X, int INCX);
void mask_gpu(int N, float * X, float mask_num, float * mask);
void scale_mask_gpu(int N, float * X, float mask_num, float * mask, float scale);
void const_gpu(int N, float ALPHA, float *X, int INCX);
void pow_gpu(int N, float ALPHA, float *X, int INCX, float *Y, int INCY);
void mul_gpu(int N, float *X, int INCX, float *Y, int INCY);
void mean_gpu(float *x, int batch, int filters, int spatial, float *mean);
void variance_gpu(float *x, float *mean, int batch, int filters, int spatial, float *variance);
void normalize_gpu(float *x, float *mean, float *variance, int batch, int filters, int spatial);
void normalize_delta_gpu(float *x, float *mean, float *variance, float *mean_delta, float *variance_delta, int batch, int filters, int spatial, float *delta);
void fast_mean_delta_gpu(float *delta, float *variance, int batch, int filters, int spatial, float *mean_delta);
void fast_variance_delta_gpu(float *x, float *delta, float *mean, float *variance, int batch, int filters, int spatial, float *variance_delta);
void fast_variance_gpu(float *x, float *mean, int batch, int filters, int spatial, float *variance);
void fast_mean_gpu(float *x, int batch, int filters, int spatial, float *mean);
void shortcut_gpu(int batch, int w1, int h1, int c1, float *add, int w2, int h2, int c2, float *out);
void scale_bias_gpu(float *output, float *biases, int batch, int n, int size);
void backward_scale_gpu(float *x_norm, float *delta, int batch, int n, int size, float *scale_updates);
void add_bias_gpu(float *output, float *biases, int batch, int n, int size);
void backward_bias_gpu(float *bias_updates, float *delta, int batch, int n, int size);
void smooth_l1_gpu(int n, float *pred, float *truth, float *delta, float *error);
void l2_gpu(int n, float *pred, float *truth, float *delta, float *error);
void l1_gpu(int n, float *pred, float *truth, float *delta, float *error);
void weighted_delta_gpu(float *a, float *b, float *s, float *da, float *db, float *ds, int num, float *dc);
void weighted_sum_gpu(float *a, float *b, float *s, int num, float *c);
void mult_add_into_gpu(int num, float *a, float *b, float *c);
void inter_gpu(int NX, float *X, int NY, float *Y, int B, float *OUT);
void deinter_gpu(int NX, float *X, int NY, float *Y, int B, float *OUT);
void reorg_gpu(float *x, int w, int h, int c, int batch, int stride, int forward, float *out);
void softmax_gpu(float *input, int n, int batch, int batch_offset, int groups, int group_offset, int stride, float temp, float *output);
void adam_update_gpu(float *w, float *d, float *m, float *v, float B1, float B2, float eps, float decay, float rate, int n, int batch, int t);
void adam_gpu(int n, float *x, float *m, float *v, float B1, float B2, float rate, float eps, int t);
void flatten_gpu(float *x, int spatial, int layers, int batch, int forward, float *out);
void softmax_tree(float *input, int spatial, int batch, int stride, float temp, float *output, tree hier);
#endif
#endif
#include "box.h"
#include <stdio.h>
#include <math.h>
#include <stdlib.h>
box float_to_box(float *f, int stride)
{
box b;
b.x = f[0];
b.y = f[1*stride];
b.w = f[2*stride];
b.h = f[3*stride];
return b;
}
dbox derivative(box a, box b)
{
dbox d;
d.dx = 0;
d.dw = 0;
float l1 = a.x - a.w/2;
float l2 = b.x - b.w/2;
if (l1 > l2){
d.dx -= 1;
d.dw += .5;
}
float r1 = a.x + a.w/2;
float r2 = b.x + b.w/2;
if(r1 < r2){
d.dx += 1;
d.dw += .5;
}
if (l1 > r2) {
d.dx = -1;
d.dw = 0;
}
if (r1 < l2){
d.dx = 1;
d.dw = 0;
}
d.dy = 0;
d.dh = 0;
float t1 = a.y - a.h/2;
float t2 = b.y - b.h/2;
if (t1 > t2){
d.dy -= 1;
d.dh += .5;
}
float b1 = a.y + a.h/2;
float b2 = b.y + b.h/2;
if(b1 < b2){
d.dy += 1;
d.dh += .5;
}
if (t1 > b2) {
d.dy = -1;
d.dh = 0;
}
if (b1 < t2){
d.dy = 1;
d.dh = 0;
}
return d;
}
float overlap(float x1, float w1, float x2, float w2)
{
float l1 = x1 - w1/2;
float l2 = x2 - w2/2;
float left = l1 > l2 ? l1 : l2;
float r1 = x1 + w1/2;
float r2 = x2 + w2/2;
float right = r1 < r2 ? r1 : r2;
return right - left;
}
float box_intersection(box a, box b)
{
float w = overlap(a.x, a.w, b.x, b.w);
float h = overlap(a.y, a.h, b.y, b.h);
if(w < 0 || h < 0) return 0;
float area = w*h;
return area;
}
float box_union(box a, box b)
{
float i = box_intersection(a, b);
float u = a.w*a.h + b.w*b.h - i;
return u;
}
float box_iou(box a, box b)
{
return box_intersection(a, b)/box_union(a, b);
}
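/* Worked example (x,y is the box center): two unit squares whose centers are
 * offset by 0.5 in x intersect in a 0.5 x 1 strip, so
 * IoU = 0.5 / (1 + 1 - 0.5) = 1/3. */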
float box_rmse(box a, box b)
{
return sqrt(pow(a.x-b.x, 2) +
pow(a.y-b.y, 2) +
pow(a.w-b.w, 2) +
pow(a.h-b.h, 2));
}
dbox dintersect(box a, box b)
{
float w = overlap(a.x, a.w, b.x, b.w);
float h = overlap(a.y, a.h, b.y, b.h);
dbox dover = derivative(a, b);
dbox di;
di.dw = dover.dw*h;
di.dx = dover.dx*h;
di.dh = dover.dh*w;
di.dy = dover.dy*w;
return di;
}
dbox dunion(box a, box b)
{
dbox du;
dbox di = dintersect(a, b);
du.dw = a.h - di.dw;
du.dh = a.w - di.dh;
du.dx = -di.dx;
du.dy = -di.dy;
return du;
}
void test_dunion()
{
box a = {0, 0, 1, 1};
box dxa= {0+.0001, 0, 1, 1};
box dya= {0, 0+.0001, 1, 1};
box dwa= {0, 0, 1+.0001, 1};
box dha= {0, 0, 1, 1+.0001};
box b = {.5, .5, .2, .2};
dbox di = dunion(a,b);
printf("Union: %f %f %f %f\n", di.dx, di.dy, di.dw, di.dh);
float inter = box_union(a, b);
float xinter = box_union(dxa, b);
float yinter = box_union(dya, b);
float winter = box_union(dwa, b);
float hinter = box_union(dha, b);
xinter = (xinter - inter)/(.0001);
yinter = (yinter - inter)/(.0001);
winter = (winter - inter)/(.0001);
hinter = (hinter - inter)/(.0001);
printf("Union Manual %f %f %f %f\n", xinter, yinter, winter, hinter);
}
void test_dintersect()
{
box a = {0, 0, 1, 1};
box dxa= {0+.0001, 0, 1, 1};
box dya= {0, 0+.0001, 1, 1};
box dwa= {0, 0, 1+.0001, 1};
box dha= {0, 0, 1, 1+.0001};
box b = {.5, .5, .2, .2};
dbox di = dintersect(a,b);
printf("Inter: %f %f %f %f\n", di.dx, di.dy, di.dw, di.dh);
float inter = box_intersection(a, b);
float xinter = box_intersection(dxa, b);
float yinter = box_intersection(dya, b);
float winter = box_intersection(dwa, b);
float hinter = box_intersection(dha, b);
xinter = (xinter - inter)/(.0001);
yinter = (yinter - inter)/(.0001);
winter = (winter - inter)/(.0001);
hinter = (hinter - inter)/(.0001);
printf("Inter Manual %f %f %f %f\n", xinter, yinter, winter, hinter);
}
void test_box()
{
test_dintersect();
test_dunion();
box a = {0, 0, 1, 1};
box dxa= {0+.00001, 0, 1, 1};
box dya= {0, 0+.00001, 1, 1};
box dwa= {0, 0, 1+.00001, 1};
box dha= {0, 0, 1, 1+.00001};
box b = {.5, 0, .2, .2};
float iou = box_iou(a,b);
iou = (1-iou)*(1-iou);
printf("%f\n", iou);
dbox d = diou(a, b);
printf("%f %f %f %f\n", d.dx, d.dy, d.dw, d.dh);
float xiou = box_iou(dxa, b);
float yiou = box_iou(dya, b);
float wiou = box_iou(dwa, b);
float hiou = box_iou(dha, b);
xiou = ((1-xiou)*(1-xiou) - iou)/(.00001);
yiou = ((1-yiou)*(1-yiou) - iou)/(.00001);
wiou = ((1-wiou)*(1-wiou) - iou)/(.00001);
hiou = ((1-hiou)*(1-hiou) - iou)/(.00001);
printf("manual %f %f %f %f\n", xiou, yiou, wiou, hiou);
}
dbox diou(box a, box b)
{
float u = box_union(a,b);
float i = box_intersection(a,b);
dbox di = dintersect(a,b);
dbox du = dunion(a,b);
dbox dd = {0,0,0,0};
if(i <= 0 || 1) { /* "|| 1" makes this always true: use the simple difference instead of the analytic IoU gradient below */
dd.dx = b.x - a.x;
dd.dy = b.y - a.y;
dd.dw = b.w - a.w;
dd.dh = b.h - a.h;
return dd;
}
dd.dx = 2*pow((1-(i/u)),1)*(di.dx*u - du.dx*i)/(u*u);
dd.dy = 2*pow((1-(i/u)),1)*(di.dy*u - du.dy*i)/(u*u);
dd.dw = 2*pow((1-(i/u)),1)*(di.dw*u - du.dw*i)/(u*u);
dd.dh = 2*pow((1-(i/u)),1)*(di.dh*u - du.dh*i)/(u*u);
return dd;
}
typedef struct{
int index;
int class;
float **probs;
} sortable_bbox;
int nms_comparator(const void *pa, const void *pb)
{
sortable_bbox a = *(sortable_bbox *)pa;
sortable_bbox b = *(sortable_bbox *)pb;
float diff = a.probs[a.index][b.class] - b.probs[b.index][b.class]; /* every element shares the same class while sorting, so b.class is safe for both */
if(diff < 0) return 1;
else if(diff > 0) return -1;
return 0;
}
void do_nms_obj(box *boxes, float **probs, int total, int classes, float thresh)
{
int i, j, k;
sortable_bbox *s = calloc(total, sizeof(sortable_bbox));
for(i = 0; i < total; ++i){
s[i].index = i;
s[i].class = classes;
s[i].probs = probs;
}
qsort(s, total, sizeof(sortable_bbox), nms_comparator);
for(i = 0; i < total; ++i){
if(probs[s[i].index][classes] == 0) continue;
box a = boxes[s[i].index];
for(j = i+1; j < total; ++j){
box b = boxes[s[j].index];
if (box_iou(a, b) > thresh){
for(k = 0; k < classes+1; ++k){
probs[s[j].index][k] = 0;
}
}
}
}
free(s);
}
void do_nms_sort(box *boxes, float **probs, int total, int classes, float thresh)
{
int i, j, k;
sortable_bbox *s = calloc(total, sizeof(sortable_bbox));
for(i = 0; i < total; ++i){
s[i].index = i;
s[i].class = 0;
s[i].probs = probs;
}
for(k = 0; k < classes; ++k){
for(i = 0; i < total; ++i){
s[i].class = k;
}
qsort(s, total, sizeof(sortable_bbox), nms_comparator);
for(i = 0; i < total; ++i){
if(probs[s[i].index][k] == 0) continue;
box a = boxes[s[i].index];
for(j = i+1; j < total; ++j){
box b = boxes[s[j].index];
if (box_iou(a, b) > thresh){
probs[s[j].index][k] = 0;
}
}
}
}
free(s);
}
void do_nms(box *boxes, float **probs, int total, int classes, float thresh)
{
int i, j, k;
for(i = 0; i < total; ++i){
int any = 0;
for(k = 0; k < classes; ++k) any = any || (probs[i][k] > 0);
if(!any) {
continue;
}
for(j = i+1; j < total; ++j){
if (box_iou(boxes[i], boxes[j]) > thresh){
for(k = 0; k < classes; ++k){
if (probs[i][k] < probs[j][k]) probs[i][k] = 0;
else probs[j][k] = 0;
}
}
}
}
}
box encode_box(box b, box anchor)
{
box encode;
encode.x = (b.x - anchor.x) / anchor.w;
encode.y = (b.y - anchor.y) / anchor.h;
encode.w = log2(b.w / anchor.w);
encode.h = log2(b.h / anchor.h);
return encode;
}
box decode_box(box b, box anchor)
{
box decode;
decode.x = b.x * anchor.w + anchor.x;
decode.y = b.y * anchor.h + anchor.y;
decode.w = pow(2., b.w) * anchor.w;
decode.h = pow(2., b.h) * anchor.h;
return decode;
}
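/* encode_box expresses a box relative to an anchor (center offset scaled by the
 * anchor size, log2-scaled width/height); decode_box is its exact inverse, so
 * decode_box(encode_box(b, a), a) reproduces b. */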
#ifndef BOX_H
#define BOX_H
#include "darknet.h"
typedef struct{
float dx, dy, dw, dh;
} dbox;
float box_rmse(box a, box b);
dbox diou(box a, box b);
box decode_box(box b, box anchor);
box encode_box(box b, box anchor);
#endif
#include <stdio.h>
#include <math.h>
void col2im_add_pixel(float *im, int height, int width, int channels,
int row, int col, int channel, int pad, float val)
{
row -= pad;
col -= pad;
if (row < 0 || col < 0 ||
row >= height || col >= width) return;
im[col + width*(row + height*channel)] += val;
}
// Like the GPU kernel further down, this is likely adapted from Caffe's
// im2col/col2im (see the src link above col2im_gpu_kernel).
void col2im_cpu(float* data_col,
int channels, int height, int width,
int ksize, int stride, int pad, float* data_im)
{
int c,h,w;
int height_col = (height + 2*pad - ksize) / stride + 1;
int width_col = (width + 2*pad - ksize) / stride + 1;
int channels_col = channels * ksize * ksize;
for (c = 0; c < channels_col; ++c) {
int w_offset = c % ksize;
int h_offset = (c / ksize) % ksize;
int c_im = c / ksize / ksize;
for (h = 0; h < height_col; ++h) {
for (w = 0; w < width_col; ++w) {
int im_row = h_offset + h * stride;
int im_col = w_offset + w * stride;
int col_index = (c * height_col + h) * width_col + w;
double val = data_col[col_index];
col2im_add_pixel(data_im, height, width, channels,
im_row, im_col, c_im, pad, val);
}
}
}
}
#ifndef COL2IM_H
#define COL2IM_H
void col2im_cpu(float* data_col,
int channels, int height, int width,
int ksize, int stride, int pad, float* data_im);
#ifdef GPU
void col2im_gpu(float *data_col,
int channels, int height, int width,
int ksize, int stride, int pad, float *data_im);
#endif
#endif
#include "cuda_runtime.h"
#include "curand.h"
#include "cublas_v2.h"
extern "C" {
#include "col2im.h"
#include "cuda.h"
}
// src: https://github.com/BVLC/caffe/blob/master/src/caffe/util/im2col.cu
// You may also want to read: https://github.com/BVLC/caffe/blob/master/LICENSE
__global__ void col2im_gpu_kernel(const int n, const float* data_col,
const int height, const int width, const int ksize,
const int pad,
const int stride,
const int height_col, const int width_col,
float *data_im) {
int index = blockIdx.x*blockDim.x+threadIdx.x;
for(; index < n; index += blockDim.x*gridDim.x){
float val = 0;
int w = index % width + pad;
int h = (index / width) % height + pad;
int c = index / (width * height);
// compute the start and end of the output
int w_col_start = (w < ksize) ? 0 : (w - ksize) / stride + 1;
int w_col_end = min(w / stride + 1, width_col);
int h_col_start = (h < ksize) ? 0 : (h - ksize) / stride + 1;
int h_col_end = min(h / stride + 1, height_col);
// equivalent implementation
int offset =
(c * ksize * ksize + h * ksize + w) * height_col * width_col;
int coeff_h_col = (1 - stride * ksize * height_col) * width_col;
int coeff_w_col = (1 - stride * height_col * width_col);
for (int h_col = h_col_start; h_col < h_col_end; ++h_col) {
for (int w_col = w_col_start; w_col < w_col_end; ++w_col) {
val += data_col[offset + h_col * coeff_h_col + w_col * coeff_w_col];
}
}
data_im[index] += val;
}
}
void col2im_gpu(float *data_col,
int channels, int height, int width,
int ksize, int stride, int pad, float *data_im){
// We launch channels * height * width threads, one per input pixel; each
// thread sums every data_col entry that maps back onto its pixel.
int height_col = (height + 2 * pad - ksize) / stride + 1;
int width_col = (width + 2 * pad - ksize) / stride + 1;
int num_kernels = channels * height * width;
col2im_gpu_kernel<<<(num_kernels+BLOCK-1)/BLOCK,
BLOCK>>>(
num_kernels, data_col, height, width, ksize, pad,
stride, height_col,
width_col, data_im);
}
#ifndef CONNECTED_LAYER_H
#define CONNECTED_LAYER_H
#include "activations.h"
#include "layer.h"
#include "network.h"
layer make_connected_layer(int batch, int inputs, int outputs, ACTIVATION activation, int batch_normalize, int adam);
void forward_connected_layer(layer l, network net);
void backward_connected_layer(layer l, network net);
void update_connected_layer(layer l, update_args a);
#ifdef GPU
void forward_connected_layer_gpu(layer l, network net);
void backward_connected_layer_gpu(layer l, network net);
void update_connected_layer_gpu(layer l, update_args a);
void push_connected_layer(layer l);
void pull_connected_layer(layer l);
#endif
#endif
#ifndef CONVOLUTIONAL_LAYER_H
#define CONVOLUTIONAL_LAYER_H
#include "cuda.h"
#include "image.h"
#include "activations.h"
#include "layer.h"
#include "network.h"
typedef layer convolutional_layer;
#ifdef GPU
void forward_convolutional_layer_gpu(convolutional_layer layer, network net);
void backward_convolutional_layer_gpu(convolutional_layer layer, network net);
void update_convolutional_layer_gpu(convolutional_layer layer, update_args a);
void push_convolutional_layer(convolutional_layer layer);
void pull_convolutional_layer(convolutional_layer layer);
void add_bias_gpu(float *output, float *biases, int batch, int n, int size);
void backward_bias_gpu(float *bias_updates, float *delta, int batch, int n, int size);
void adam_update_gpu(float *w, float *d, float *m, float *v, float B1, float B2, float eps, float decay, float rate, int n, int batch, int t);
#ifdef CUDNN
void cudnn_convolutional_setup(layer *l);
#endif
#endif
convolutional_layer make_convolutional_layer(int batch, int h, int w, int c, int n, int size, int stride, int padding, ACTIVATION activation, int batch_normalize, int binary, int xnor, int adam);
void resize_convolutional_layer(convolutional_layer *layer, int w, int h);
void forward_convolutional_layer(const convolutional_layer layer, network net);
void update_convolutional_layer(convolutional_layer layer, update_args a);
image *visualize_convolutional_layer(convolutional_layer layer, char *window, image *prev_weights);
void binarize_weights(float *weights, int n, int size, float *binary);
void swap_binary(convolutional_layer *l);
void binarize_weights2(float *weights, int n, int size, char *binary, float *scales);
void backward_convolutional_layer(convolutional_layer layer, network net);
void add_bias(float *output, float *biases, int batch, int n, int size);
void backward_bias(float *bias_updates, float *delta, int batch, int n, int size);
image get_convolutional_image(convolutional_layer layer);
image get_convolutional_delta(convolutional_layer layer);
image get_convolutional_weight(convolutional_layer layer, int i);
int convolutional_out_height(convolutional_layer layer);
int convolutional_out_width(convolutional_layer layer);
#endif
#include "cost_layer.h"
#include "utils.h"
#include "cuda.h"
#include "blas.h"
#include <math.h>
#include <string.h>
#include <stdlib.h>
#include <stdio.h>
COST_TYPE get_cost_type(char *s)
{
if (strcmp(s, "seg")==0) return SEG;
if (strcmp(s, "sse")==0) return SSE;
if (strcmp(s, "masked")==0) return MASKED;
if (strcmp(s, "smooth")==0) return SMOOTH;
if (strcmp(s, "L1")==0) return L1;
fprintf(stderr, "Couldn't find cost type %s, going with SSE\n", s);
return SSE;
}
char *get_cost_string(COST_TYPE a)
{
switch(a){
case SEG:
return "seg";
case SSE:
return "sse";
case MASKED:
return "masked";
case SMOOTH:
return "smooth";
case L1:
return "L1";
}
return "sse";
}
cost_layer make_cost_layer(int batch, int inputs, COST_TYPE cost_type, float scale)
{
fprintf(stderr, "cost %4d\n", inputs);
cost_layer l = {0};
l.type = COST;
l.scale = scale;
l.batch = batch;
l.inputs = inputs;
l.outputs = inputs;
l.cost_type = cost_type;
l.delta = calloc(inputs*batch, sizeof(float));
l.output = calloc(inputs*batch, sizeof(float));
l.cost = calloc(1, sizeof(float));
l.forward = forward_cost_layer;
l.backward = backward_cost_layer;
#ifdef GPU
l.forward_gpu = forward_cost_layer_gpu;
l.backward_gpu = backward_cost_layer_gpu;
l.delta_gpu = cuda_make_array(l.delta, inputs*batch);
l.output_gpu = cuda_make_array(l.output, inputs*batch);
#endif
return l;
}
void resize_cost_layer(cost_layer *l, int inputs)
{
l->inputs = inputs;
l->outputs = inputs;
l->delta = realloc(l->delta, inputs*l->batch*sizeof(float));
l->output = realloc(l->output, inputs*l->batch*sizeof(float));
#ifdef GPU
cuda_free(l->delta_gpu);
cuda_free(l->output_gpu);
l->delta_gpu = cuda_make_array(l->delta, inputs*l->batch);
l->output_gpu = cuda_make_array(l->output, inputs*l->batch);
#endif
}
void forward_cost_layer(cost_layer l, network net)
{
if (!net.truth) return;
if(l.cost_type == MASKED){
int i;
for(i = 0; i < l.batch*l.inputs; ++i){
if(net.truth[i] == SECRET_NUM) net.input[i] = SECRET_NUM;
}
}
if(l.cost_type == SMOOTH){
smooth_l1_cpu(l.batch*l.inputs, net.input, net.truth, l.delta, l.output);
}else if(l.cost_type == L1){
l1_cpu(l.batch*l.inputs, net.input, net.truth, l.delta, l.output);
} else {
l2_cpu(l.batch*l.inputs, net.input, net.truth, l.delta, l.output);
}
l.cost[0] = sum_array(l.output, l.batch*l.inputs);
}
void backward_cost_layer(const cost_layer l, network net)
{
axpy_cpu(l.batch*l.inputs, l.scale, l.delta, 1, net.delta, 1);
}
#ifdef GPU
void pull_cost_layer(cost_layer l)
{
cuda_pull_array(l.delta_gpu, l.delta, l.batch*l.inputs);
}
void push_cost_layer(cost_layer l)
{
cuda_push_array(l.delta_gpu, l.delta, l.batch*l.inputs);
}
int float_abs_compare (const void * a, const void * b)
{
float fa = *(const float*) a;
if(fa < 0) fa = -fa;
float fb = *(const float*) b;
if(fb < 0) fb = -fb;
return (fa > fb) - (fa < fb);
}
void forward_cost_layer_gpu(cost_layer l, network net)
{
if (!net.truth_gpu) return;
if(l.smooth){
scal_gpu(l.batch*l.inputs, (1-l.smooth), net.truth_gpu, 1);
add_gpu(l.batch*l.inputs, l.smooth * 1./l.inputs, net.truth_gpu, 1);
}
if (l.cost_type == MASKED) {
mask_gpu(l.batch*l.inputs, net.input_gpu, SECRET_NUM, net.truth_gpu);
}
if(l.cost_type == SMOOTH){
smooth_l1_gpu(l.batch*l.inputs, net.input_gpu, net.truth_gpu, l.delta_gpu, l.output_gpu);
} else if (l.cost_type == L1){
l1_gpu(l.batch*l.inputs, net.input_gpu, net.truth_gpu, l.delta_gpu, l.output_gpu);
} else {
l2_gpu(l.batch*l.inputs, net.input_gpu, net.truth_gpu, l.delta_gpu, l.output_gpu);
}
if (l.cost_type == SEG && l.noobject_scale != 1) {
scale_mask_gpu(l.batch*l.inputs, l.delta_gpu, 0, net.truth_gpu, l.noobject_scale);
scale_mask_gpu(l.batch*l.inputs, l.output_gpu, 0, net.truth_gpu, l.noobject_scale);
}
if(l.ratio){
cuda_pull_array(l.delta_gpu, l.delta, l.batch*l.inputs);
qsort(l.delta, l.batch*l.inputs, sizeof(float), float_abs_compare);
int n = (1-l.ratio) * l.batch*l.inputs;
float thresh = l.delta[n];
thresh = 0; /* debug override left in: the computed threshold is discarded, so nothing is suppressed */
printf("%f\n", thresh);
supp_gpu(l.batch*l.inputs, thresh, l.delta_gpu, 1);
}
if(l.thresh){
supp_gpu(l.batch*l.inputs, l.thresh*1./l.inputs, l.delta_gpu, 1);
}
cuda_pull_array(l.output_gpu, l.output, l.batch*l.inputs);
l.cost[0] = sum_array(l.output, l.batch*l.inputs);
}
void backward_cost_layer_gpu(const cost_layer l, network net)
{
axpy_gpu(l.batch*l.inputs, l.scale, l.delta_gpu, 1, net.delta_gpu, 1);
}
#endif
#ifndef COST_LAYER_H
#define COST_LAYER_H
#include "layer.h"
#include "network.h"
typedef layer cost_layer;
COST_TYPE get_cost_type(char *s);
char *get_cost_string(COST_TYPE a);
cost_layer make_cost_layer(int batch, int inputs, COST_TYPE type, float scale);
void forward_cost_layer(const cost_layer l, network net);
void backward_cost_layer(const cost_layer l, network net);
void resize_cost_layer(cost_layer *l, int inputs);
#ifdef GPU
void forward_cost_layer_gpu(cost_layer l, network net);
void backward_cost_layer_gpu(const cost_layer l, network net);
#endif
#endif
#include "crnn_layer.h"
#include "convolutional_layer.h"
#include "utils.h"
#include "cuda.h"
#include "blas.h"
#include "gemm.h"
#include <math.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
static void increment_layer(layer *l, int steps)
{
/* Advance the layer's per-step buffers by `steps` time slices so the
 * same layer struct can be reused at another step of the sequence.
 * x and x_norm are only allocated for batch-normalized layers, so
 * guard them before doing pointer arithmetic. */
int num = l->outputs*l->batch*steps;
l->output += num;
l->delta += num;
if(l->x) l->x += num;
if(l->x_norm) l->x_norm += num;
#ifdef GPU
l->output_gpu += num;
l->delta_gpu += num;
if(l->x_gpu) l->x_gpu += num;
if(l->x_norm_gpu) l->x_norm_gpu += num;
#endif
}
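/* Example of the arithmetic above: with outputs = K and batch = B,
 * increment_layer(&l, 1) moves l.output (and the other per-step
 * buffers) forward by K*B floats, i.e. from the slice for time step t
 * to the slice for t+1; increment_layer(&l, -1) steps back. This
 * relies on the sub-layers being allocated with batch*steps rows so
 * the per-step slices sit contiguously (see make_crnn_layer below). */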
layer make_crnn_layer(int batch, int h, int w, int c, int hidden_filters, int output_filters, int steps, ACTIVATION activation, int batch_normalize)
{
fprintf(stderr, "CRNN Layer: %d x %d x %d image, %d filters\n", h,w,c,output_filters);
batch = batch / steps;
layer l = {0};
l.batch = batch;
l.type = CRNN;
l.steps = steps;
l.h = h;
l.w = w;
l.c = c;
l.out_h = h;
l.out_w = w;
l.out_c = output_filters;
l.inputs = h*w*c;
l.hidden = h * w * hidden_filters;
l.outputs = l.out_h * l.out_w * l.out_c;
l.state = calloc(l.hidden*batch*(steps+1), sizeof(float));
l.input_layer = malloc(sizeof(layer));
fprintf(stderr, "\t\t");
*(l.input_layer) = make_convolutional_layer(batch*steps, h, w, c, hidden_filters, 3, 1, 1, activation, batch_normalize, 0, 0, 0);
l.input_layer->batch = batch;
l.self_layer = malloc(sizeof(layer));
fprintf(stderr, "\t\t");
*(l.self_layer) = make_convolutional_layer(batch*steps, h, w, hidden_filters, hidden_filters, 3, 1, 1, activation, batch_normalize, 0, 0, 0);
l.self_layer->batch = batch;
l.output_layer = malloc(sizeof(layer));
fprintf(stderr, "\t\t");
*(l.output_layer) = make_convolutional_layer(batch*steps, h, w, hidden_filters, output_filters, 3, 1, 1, activation, batch_normalize, 0, 0, 0);
l.output_layer->batch = batch;
l.output = l.output_layer->output;
l.delta = l.output_layer->delta;
l.forward = forward_crnn_layer;
l.backward = backward_crnn_layer;
l.update = update_crnn_layer;
#ifdef GPU
l.forward_gpu = forward_crnn_layer_gpu;
l.backward_gpu = backward_crnn_layer_gpu;
l.update_gpu = update_crnn_layer_gpu;
l.state_gpu = cuda_make_array(l.state, l.hidden*batch*(steps+1));
l.output_gpu = l.output_layer->output_gpu;
l.delta_gpu = l.output_layer->delta_gpu;
#endif
return l;
}
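/* Structure sketch: the three sub-layers implement the recurrence
 *
 *   h_t = f(W_x * x_t) + f(W_h * h_{t-1})   // input_layer + self_layer
 *   y_t = f(W_y * h_t)                      // output_layer
 *
 * where f is the chosen activation applied inside each convolution.
 * l.state stores steps+1 hidden slices of l.hidden floats per batch so
 * the whole trajectory h_0..h_steps survives for the backward pass. */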
void update_crnn_layer(layer l, update_args a)
{
update_convolutional_layer(*(l.input_layer), a);
update_convolutional_layer(*(l.self_layer), a);
update_convolutional_layer(*(l.output_layer), a);
}
void forward_crnn_layer(layer l, network net)
{
network s = net;
int i;
layer input_layer = *(l.input_layer);
layer self_layer = *(l.self_layer);
layer output_layer = *(l.output_layer);
fill_cpu(l.outputs * l.batch * l.steps, 0, output_layer.delta, 1);
fill_cpu(l.hidden * l.batch * l.steps, 0, self_layer.delta, 1);
fill_cpu(l.hidden * l.batch * l.steps, 0, input_layer.delta, 1);
if(net.train) fill_cpu(l.hidden * l.batch, 0, l.state, 1);
for (i = 0; i < l.steps; ++i) {
s.input = net.input;
forward_convolutional_layer(input_layer, s);
s.input = l.state;
forward_convolutional_layer(self_layer, s);
float *old_state = l.state;
if(net.train) l.state += l.hidden*l.batch;
if(l.shortcut){
copy_cpu(l.hidden * l.batch, old_state, 1, l.state, 1);
}else{
fill_cpu(l.hidden * l.batch, 0, l.state, 1);
}
axpy_cpu(l.hidden * l.batch, 1, input_layer.output, 1, l.state, 1);
axpy_cpu(l.hidden * l.batch, 1, self_layer.output, 1, l.state, 1);
s.input = l.state;
forward_convolutional_layer(output_layer, s);
net.input += l.inputs*l.batch;
increment_layer(&input_layer, 1);
increment_layer(&self_layer, 1);
increment_layer(&output_layer, 1);
}
}
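/* Training-time bookkeeping in the loop above: in train mode l.state
 * advances one slice per step before being written, so every hidden
 * state is preserved for backward_crnn_layer; at inference the single
 * slice is simply reused in place. */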
void backward_crnn_layer(layer l, network net)
{
network s = net;
int i;
layer input_layer = *(l.input_layer);
layer self_layer = *(l.self_layer);
layer output_layer = *(l.output_layer);
increment_layer(&input_layer, l.steps-1);
increment_layer(&self_layer, l.steps-1);
increment_layer(&output_layer, l.steps-1);
l.state += l.hidden*l.batch*l.steps;
for (i = l.steps-1; i >= 0; --i) {
copy_cpu(l.hidden * l.batch, input_layer.output, 1, l.state, 1);
axpy_cpu(l.hidden * l.batch, 1, self_layer.output, 1, l.state, 1);
s.input = l.state;
s.delta = self_layer.delta;
backward_convolutional_layer(output_layer, s);
l.state -= l.hidden*l.batch;
/* Alternative (disabled): recompute the previous step's state from the
 * stored sub-layer outputs instead of reading the saved l.state slice. */
s.input = l.state;
s.delta = self_layer.delta - l.hidden*l.batch;
if (i == 0) s.delta = 0;
backward_convolutional_layer(self_layer, s);
copy_cpu(l.hidden*l.batch, self_layer.delta, 1, input_layer.delta, 1);
if (i > 0 && l.shortcut) axpy_cpu(l.hidden*l.batch, 1, self_layer.delta, 1, self_layer.delta - l.hidden*l.batch, 1);
s.input = net.input + i*l.inputs*l.batch;
if(net.delta) s.delta = net.delta + i*l.inputs*l.batch;
else s.delta = 0;
backward_convolutional_layer(input_layer, s);
increment_layer(&input_layer, -1);
increment_layer(&self_layer, -1);
increment_layer(&output_layer, -1);
}
}
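/* Backpropagation through time, read bottom-up: each iteration rebuilds
 * h_t from the stored sub-layer outputs, pushes the output gradient
 * into self_layer.delta, splits it between the recurrent path (the
 * previous step's delta slice) and the input path, then steps every
 * layer back one slice. */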
#ifdef GPU
void pull_crnn_layer(layer l)
{
pull_convolutional_layer(*(l.input_layer));
pull_convolutional_layer(*(l.self_layer));
pull_convolutional_layer(*(l.output_layer));
}
void push_crnn_layer(layer l)
{
push_convolutional_layer(*(l.input_layer));
push_convolutional_layer(*(l.self_layer));
push_convolutional_layer(*(l.output_layer));
}
void update_crnn_layer_gpu(layer l, update_args a)
{
update_convolutional_layer_gpu(*(l.input_layer), a);
update_convolutional_layer_gpu(*(l.self_layer), a);
update_convolutional_layer_gpu(*(l.output_layer), a);
}
void forward_crnn_layer_gpu(layer l, network net)
{
network s = net;
int i;
layer input_layer = *(l.input_layer);
layer self_layer = *(l.self_layer);
layer output_layer = *(l.output_layer);
fill_gpu(l.outputs * l.batch * l.steps, 0, output_layer.delta_gpu, 1);
fill_gpu(l.hidden * l.batch * l.steps, 0, self_layer.delta_gpu, 1);
fill_gpu(l.hidden * l.batch * l.steps, 0, input_layer.delta_gpu, 1);
if(net.train) fill_gpu(l.hidden * l.batch, 0, l.state_gpu, 1);
for (i = 0; i < l.steps; ++i) {
s.input_gpu = net.input_gpu;
forward_convolutional_layer_gpu(input_layer, s);
s.input_gpu = l.state_gpu;
forward_convolutional_layer_gpu(self_layer, s);
float *old_state = l.state_gpu;
if(net.train) l.state_gpu += l.hidden*l.batch;
if(l.shortcut){
copy_gpu(l.hidden * l.batch, old_state, 1, l.state_gpu, 1);
}else{
fill_gpu(l.hidden * l.batch, 0, l.state_gpu, 1);
}
axpy_gpu(l.hidden * l.batch, 1, input_layer.output_gpu, 1, l.state_gpu, 1);
axpy_gpu(l.hidden * l.batch, 1, self_layer.output_gpu, 1, l.state_gpu, 1);
s.input_gpu = l.state_gpu;
forward_convolutional_layer_gpu(output_layer, s);
net.input_gpu += l.inputs*l.batch;
increment_layer(&input_layer, 1);
increment_layer(&self_layer, 1);
increment_layer(&output_layer, 1);
}
}
void backward_crnn_layer_gpu(layer l, network net)
{
network s = net;
int i;
layer input_layer = *(l.input_layer);
layer self_layer = *(l.self_layer);
layer output_layer = *(l.output_layer);
increment_layer(&input_layer, l.steps - 1);
increment_layer(&self_layer, l.steps - 1);
increment_layer(&output_layer, l.steps - 1);
l.state_gpu += l.hidden*l.batch*l.steps;
for (i = l.steps-1; i >= 0; --i) {
copy_gpu(l.hidden * l.batch, input_layer.output_gpu, 1, l.state_gpu, 1);
axpy_gpu(l.hidden * l.batch, 1, self_layer.output_gpu, 1, l.state_gpu, 1);
s.input_gpu = l.state_gpu;
s.delta_gpu = self_layer.delta_gpu;
backward_convolutional_layer_gpu(output_layer, s);
l.state_gpu -= l.hidden*l.batch;
s.input_gpu = l.state_gpu;
s.delta_gpu = self_layer.delta_gpu - l.hidden*l.batch;
if (i == 0) s.delta_gpu = 0;
backward_convolutional_layer_gpu(self_layer, s);
copy_gpu(l.hidden*l.batch, self_layer.delta_gpu, 1, input_layer.delta_gpu, 1);
if (i > 0 && l.shortcut) axpy_gpu(l.hidden*l.batch, 1, self_layer.delta_gpu, 1, self_layer.delta_gpu - l.hidden*l.batch, 1);
s.input_gpu = net.input_gpu + i*l.inputs*l.batch;
if(net.delta_gpu) s.delta_gpu = net.delta_gpu + i*l.inputs*l.batch;
else s.delta_gpu = 0;
backward_convolutional_layer_gpu(input_layer, s);
increment_layer(&input_layer, -1);
increment_layer(&self_layer, -1);
increment_layer(&output_layer, -1);
}
}
#endif
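/* Usage sketch (untested): a CRNN over 4 steps of 32x32x3 frames.
 * make_crnn_layer divides batch by steps, so pass the total image
 * count across all steps; LEAKY is assumed from this codebase's
 * ACTIVATION enum.
 *
 *   int steps = 4, per_step = 2;
 *   layer l = make_crnn_layer(per_step*steps, 32, 32, 3, 16, 8,
 *                             steps, LEAKY, 1);
 *   // net.input: steps contiguous chunks of per_step*32*32*3 floats
 *   // l.output:  steps contiguous chunks of per_step*32*32*8 floats
 */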
#ifndef CRNN_LAYER_H
#define CRNN_LAYER_H
#include "activations.h"
#include "layer.h"
#include "network.h"
layer make_crnn_layer(int batch, int h, int w, int c, int hidden_filters, int output_filters, int steps, ACTIVATION activation, int batch_normalize);
void forward_crnn_layer(layer l, network net);
void backward_crnn_layer(layer l, network net);
void update_crnn_layer(layer l, update_args a);
#ifdef GPU
void forward_crnn_layer_gpu(layer l, network net);
void backward_crnn_layer_gpu(layer l, network net);
void update_crnn_layer_gpu(layer l, update_args a);
void push_crnn_layer(layer l);
void pull_crnn_layer(layer l);
#endif
#endif