[Illustrated Tutorial] Recognizing Captcha-library CAPTCHAs with TensorFlow

Posted on 2018-5-30 22:32:26
This post was last edited by yueying on 2018-5-30 22:41.

[Image: 2vU7.jpg]

Accuracy is around 96%. The model was published by Tencent Cloud Classroom (腾讯云课堂); I ran it on my machine for 6 hours and the recognition results are very good. I have tidied up the original tutorial and reproduce it below.

Bonus: at the very bottom of this post there is a network model for recognizing alphanumeric CAPTCHAs.

Introduction
Traditional CAPTCHA recognition algorithms usually have to split the CAPTCHA into individual characters and recognize them one by one. This tutorial instead turns CAPTCHA recognition into a classification problem and recognizes the whole CAPTCHA at once.
  • Overview of the steps
This tutorial consists of four parts:
  • generate_captcha.py - generate CAPTCHAs with the Captcha library;
  • captcha_model.py - the CNN model;
  • train_captcha.py - train the CNN model;
  • predict_captcha.py - recognize CAPTCHAs.

1. Install the Captcha library via pip:

pip install captcha


2. Obtain training data
The CAPTCHAs used in this tutorial are made up of digits, uppercase letters and lowercase letters. Each CAPTCHA contains 4 characters drawn from 62 possible characters, so there are 62^4 distinct CAPTCHAs in total (see the quick check below).
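
As a quick sanity check of that count, the character set and the size of the label space can be computed directly (a tiny snippet, using the same character set as the generate_captcha.py code below):

import string

characters = string.digits + string.ascii_uppercase + string.ascii_lowercase
print(len(characters))       # 62 possible characters per position
print(len(characters) ** 4)  # 62^4 = 14,776,336 distinct 4-character CAPTCHAs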

3. Create a file named generate_captcha.py and paste in the following source code:
#-*- coding:utf-8 -*-
from captcha.image import ImageCaptcha
from PIL import Image
import numpy as np
import random
import string

class generateCaptcha():
    def __init__(self,
                 width = 160,   # width of the CAPTCHA image
                 height = 60,   # height of the CAPTCHA image
                 char_num = 4,  # number of characters per CAPTCHA
                 characters = string.digits + string.ascii_uppercase + string.ascii_lowercase):  # character set: digits + uppercase + lowercase letters
        self.width = width
        self.height = height
        self.char_num = char_num
        self.characters = characters
        self.classes = len(characters)

    def gen_captcha(self,batch_size = 50):
        image = ImageCaptcha(width = self.width,height = self.height)

        while True:
            # allocate fresh arrays on every pass so the reshape of Y below
            # does not break the one-hot indexing on the next iteration
            X = np.zeros([batch_size,self.height,self.width,1])
            Y = np.zeros([batch_size,self.char_num,self.classes])
            for i in range(batch_size):
                captcha_str = ''.join(random.sample(self.characters,self.char_num))
                img = image.generate_image(captcha_str).convert('L')
                img = np.array(img.getdata())
                X[i] = np.reshape(img,[self.height,self.width,1])/255.0
                for j,ch in enumerate(captcha_str):
                    Y[i,j,self.characters.find(ch)] = 1
            Y = np.reshape(Y,(batch_size,self.char_num*self.classes))
            yield X,Y

    def decode_captcha(self,y):
        y = np.reshape(y,(len(y),self.char_num,self.classes))
        return ''.join(self.characters[x] for x in np.argmax(y,axis = 2)[0,:])

    def get_parameter(self):
        return self.width,self.height,self.char_num,self.characters,self.classes

    def gen_test_captcha(self):
        image = ImageCaptcha(width = self.width,height = self.height)
        captcha_str = ''.join(random.sample(self.characters,self.char_num))
        img = image.generate_image(captcha_str)
        img.save(captcha_str + '.jpg')

        X = np.zeros([1,self.height,self.width,1])
        Y = np.zeros([1,self.char_num,self.classes])
        img = img.convert('L')
        img = np.array(img.getdata())
        X[0] = np.reshape(img,[self.height,self.width,1])/255.0
        for j,ch in enumerate(captcha_str):
            Y[0,j,self.characters.find(ch)] = 1
        Y = np.reshape(Y,(1,self.char_num*self.classes))
        return X,Y


Understanding the training data
  • X: one mini-batch of training data with shape [batch_size, height, width, 1], where batch_size is the number of samples per batch, height and width are the CAPTCHA image dimensions, and 1 is the number of image channels (grayscale).
  • Y: the label saying which class each sample in X belongs to, with shape [batch_size, class]. Every character of the CAPTCHA is one-hot encoded, so class is 4*62 (see the small sketch below).
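
To make the label layout concrete, here is a minimal sketch of how a single CAPTCHA string maps to its flattened one-hot vector and back (the string 'aB3x' is just a made-up example; the encoding mirrors gen_captcha and decode_captcha above):

import numpy as np
import string

characters = string.digits + string.ascii_uppercase + string.ascii_lowercase
char_num, classes = 4, len(characters)

captcha_str = 'aB3x'                      # hypothetical example string
Y = np.zeros((char_num, classes))
for j, ch in enumerate(captcha_str):
    Y[j, characters.find(ch)] = 1         # one one-hot row per character
Y = Y.reshape(char_num * classes)         # flattened label, shape (248,) == (4*62,)

# decoding reverses the process, just like decode_captcha
decoded = ''.join(characters[k] for k in Y.reshape(char_num, classes).argmax(axis=1))
print(decoded)                            # -> aB3x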


4. Generate training data
Start the Python interpreter and run:

python
from generate_captcha import generateCaptcha
g = generateCaptcha()
X,Y = g.gen_test_captcha()

Inspect the training data:

X.shape
Y.shape

The commands above enter the Python environment, import the CAPTCHA-generation module we just created, and use the Captcha library to generate one sample CAPTCHA image together with its label data.
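
With the default constructor parameters (width 160, height 60, 4 characters, 62 classes), the shapes should come out as follows:

X.shape   # (1, 60, 160, 1)  -- one grayscale image, height 60, width 160, 1 channel
Y.shape   # (1, 248)         -- one flattened one-hot label, 4*62 = 248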

5. The CNN model
  • CNN model
The network has 5 layers in total: the first 3 are convolutional layers, and layers 4 and 5 are fully connected. Dropout is applied to all 4 hidden layers.
The network structure is as follows: input——>conv——>pool——>dropout——>conv——>pool——>dropout——>conv——>pool——>dropout——>fully connected layer——>dropout——>fully connected layer——>output
[Image: cnn_captcha.jpg]
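
For the default 160x60 input, the three stride-2 max-pool layers shrink the feature map step by step, which determines the input size of the first fully connected layer in the captcha_model.py code below. A quick check (same ceil arithmetic as the code, since the pooling uses 'SAME' padding):

import math

width, height = 160, 60
for _ in range(3):                      # three 2x2 max-pools with stride 2
    width, height = math.ceil(width / 2), math.ceil(height / 2)
print(width, height)                    # 20 8
print(64 * width * height)              # 10240 inputs feeding the 1024-unit FC layer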

Create a file named captcha_model.py and paste in the following source code:
# -*- coding: utf-8 -*-
import tensorflow as tf
import math

class captchaModel():
    def __init__(self,
                 width = 160,
                 height = 60,
                 char_num = 4,
                 classes = 62):
        self.width = width
        self.height = height
        self.char_num = char_num
        self.classes = classes

    def conv2d(self,x, W):
        return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')

    def max_pool_2x2(self,x):
        return tf.nn.max_pool(x, ksize=[1, 2, 2, 1],
                              strides=[1, 2, 2, 1], padding='SAME')

    def weight_variable(self,shape):
        initial = tf.truncated_normal(shape, stddev=0.1)
        return tf.Variable(initial)

    def bias_variable(self,shape):
        initial = tf.constant(0.1, shape=shape)
        return tf.Variable(initial)

    def create_model(self,x_images,keep_prob):
        # first convolutional layer: 5x5 kernels, 1 input channel, 32 feature maps
        w_conv1 = self.weight_variable([5, 5, 1, 32])
        b_conv1 = self.bias_variable([32])
        h_conv1 = tf.nn.relu(tf.nn.bias_add(self.conv2d(x_images, w_conv1), b_conv1))
        h_pool1 = self.max_pool_2x2(h_conv1)
        h_dropout1 = tf.nn.dropout(h_pool1,keep_prob)
        # track the spatial size after each 2x2 max-pool
        conv_width = math.ceil(self.width/2)
        conv_height = math.ceil(self.height/2)

        # second convolutional layer: 32 -> 64 feature maps
        w_conv2 = self.weight_variable([5, 5, 32, 64])
        b_conv2 = self.bias_variable([64])
        h_conv2 = tf.nn.relu(tf.nn.bias_add(self.conv2d(h_dropout1, w_conv2), b_conv2))
        h_pool2 = self.max_pool_2x2(h_conv2)
        h_dropout2 = tf.nn.dropout(h_pool2,keep_prob)
        conv_width = math.ceil(conv_width/2)
        conv_height = math.ceil(conv_height/2)

        # third convolutional layer: 64 -> 64 feature maps
        w_conv3 = self.weight_variable([5, 5, 64, 64])
        b_conv3 = self.bias_variable([64])
        h_conv3 = tf.nn.relu(tf.nn.bias_add(self.conv2d(h_dropout2, w_conv3), b_conv3))
        h_pool3 = self.max_pool_2x2(h_conv3)
        h_dropout3 = tf.nn.dropout(h_pool3,keep_prob)
        conv_width = math.ceil(conv_width/2)
        conv_height = math.ceil(conv_height/2)

        # first fully connected layer: flatten, then 1024 units
        conv_width = int(conv_width)
        conv_height = int(conv_height)
        w_fc1 = self.weight_variable([64*conv_width*conv_height,1024])
        b_fc1 = self.bias_variable([1024])
        h_dropout3_flat = tf.reshape(h_dropout3,[-1,64*conv_width*conv_height])
        h_fc1 = tf.nn.relu(tf.nn.bias_add(tf.matmul(h_dropout3_flat, w_fc1), b_fc1))
        h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)

        # second fully connected layer: char_num*classes logits in total
        w_fc2 = self.weight_variable([1024,self.char_num*self.classes])
        b_fc2 = self.bias_variable([self.char_num*self.classes])
        y_conv = tf.add(tf.matmul(h_fc1_drop, w_fc2), b_fc2)

        return y_conv
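
A quick way to sanity-check the model definition is to build the graph once and inspect the logits shape; a minimal sketch (TensorFlow 1.x, same defaults as above):

import tensorflow as tf
import captcha_model

model = captcha_model.captchaModel(width=160, height=60, char_num=4, classes=62)
x = tf.placeholder(tf.float32, [None, 60, 160, 1])
keep_prob = tf.placeholder(tf.float32)
y_conv = model.create_model(x, keep_prob)
print(y_conv.shape)   # (?, 248) -- char_num * classes logits per image
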
6. Train the CNN model
Each batch uses 64 training samples, and recognition accuracy is checked once every 100 steps. Training stops once the accuracy exceeds 0.99. On a GPU this takes roughly 4-5 hours; on a CPU, about 20 hours.
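
Note that the accuracy reported by the script below is per character, not per whole CAPTCHA: the logits are reshaped to [batch, char_num, classes] and the argmax is compared position by position. A small numpy sketch of that metric (the logits and labels here are random, purely to illustrate the computation):

import numpy as np

char_num, classes, batch = 4, 62, 8
logits = np.random.randn(batch, char_num * classes)             # stand-in for y_conv
true_idx = np.random.randint(classes, size=(batch, char_num))   # stand-in for the true characters
labels = np.zeros((batch, char_num, classes))
labels[np.arange(batch)[:, None], np.arange(char_num), true_idx] = 1
labels = labels.reshape(batch, char_num * classes)              # stand-in for y_

pred = logits.reshape(-1, char_num, classes).argmax(axis=2)
real = labels.reshape(-1, char_num, classes).argmax(axis=2)
print((pred == real).mean())   # fraction of individual characters predicted correctly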

Create a file named train_captcha.py and paste in the following source code:
#-*- coding:utf-8 -*-
import tensorflow as tf
import numpy as np
import string
import generate_captcha
import captcha_model

if __name__ == '__main__':
    captcha = generate_captcha.generateCaptcha()
    width,height,char_num,characters,classes = captcha.get_parameter()

    x = tf.placeholder(tf.float32, [None, height,width,1])
    y_ = tf.placeholder(tf.float32, [None, char_num*classes])
    keep_prob = tf.placeholder(tf.float32)

    model = captcha_model.captchaModel(width,height,char_num,classes)
    y_conv = model.create_model(x,keep_prob)
    cross_entropy = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(labels=y_,logits=y_conv))
    train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)

    # per-character accuracy: compare argmax over the 62 classes for each of the 4 positions
    predict = tf.reshape(y_conv, [-1,char_num, classes])
    real = tf.reshape(y_,[-1,char_num, classes])
    correct_prediction = tf.equal(tf.argmax(predict,2), tf.argmax(real,2))
    correct_prediction = tf.cast(correct_prediction, tf.float32)
    accuracy = tf.reduce_mean(correct_prediction)

    saver = tf.train.Saver()
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        step = 1
        while True:
            batch_x,batch_y = next(captcha.gen_captcha(64))
            _,loss = sess.run([train_step,cross_entropy],feed_dict={x: batch_x, y_: batch_y, keep_prob: 0.75})
            print('step:%d,loss:%f' % (step,loss))
            if step % 100 == 0:
                batch_x_test,batch_y_test = next(captcha.gen_captcha(100))
                acc = sess.run(accuracy, feed_dict={x: batch_x_test, y_: batch_y_test, keep_prob: 1.})
                print('###############################################step:%d,accuracy:%f' % (step,acc))
                if acc > 0.99:
                    saver.save(sess,"./capcha_model.ckpt")
                    break
            step += 1



Then run:

python train_captcha.py


Sample output:

step:75193,loss:0.010931
step:75194,loss:0.012859
step:75195,loss:0.008747
step:75196,loss:0.009147
step:75197,loss:0.009351
step:75198,loss:0.009746
step:75199,loss:0.010014
step:75200,loss:0.009024

step is the training step count; every 100 steps the accuracy is measured on a freshly generated test batch; loss is the loss value.

The code files, the trained model and sample CAPTCHAs are packaged for download here: https://pan.baidu.com/s/1fbuvnF3RSOuxI_wqt8GpsQ  password: ugbd


7. Test the CAPTCHA recognition
Create a file named predict_captcha.py and paste in the following source code:
#-*- coding:utf-8 -*-
from PIL import Image, ImageFilter
import tensorflow as tf
import numpy as np
import string
import sys
import generate_captcha
import captcha_model

if __name__ == '__main__':
    captcha = generate_captcha.generateCaptcha()
    width,height,char_num,characters,classes = captcha.get_parameter()

    # load the image given on the command line and normalize it like the training data
    gray_image = Image.open(sys.argv[1]).convert('L')
    img = np.array(gray_image.getdata())
    test_x = np.reshape(img,[height,width,1])/255.0
    x = tf.placeholder(tf.float32, [None, height,width,1])
    keep_prob = tf.placeholder(tf.float32)

    model = captcha_model.captchaModel(width,height,char_num,classes)
    y_conv = model.create_model(x,keep_prob)
    predict = tf.argmax(tf.reshape(y_conv, [-1,char_num, classes]),2)
    init_op = tf.global_variables_initializer()
    saver = tf.train.Saver()
    gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.95)
    with tf.Session(config=tf.ConfigProto(log_device_placement=False,gpu_options=gpu_options)) as sess:
        sess.run(init_op)
        saver.restore(sess, "capcha_model.ckpt")
        pre_list = sess.run(predict,feed_dict={x: [test_x], keep_prob: 1})
        # map each predicted class index back to its character
        for i in pre_list:
            s = ''
            for j in i:
                s += characters[j]
            print(s)
Then run:

python predict_captcha.py captcha/0hWn.jpg
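
Because the x placeholder has a leading batch dimension, several images can in principle be recognized in one pass by stacking them into a single feed batch. A rough sketch of the idea (not part of the original script; the second file name is made up), to be placed inside the same with tf.Session(...) block of predict_captcha.py after saver.restore:

paths = ['captcha/0hWn.jpg', 'captcha/another.jpg']   # second path is hypothetical
batch = []
for p in paths:
    gray = Image.open(p).convert('L')
    batch.append(np.reshape(np.array(gray.getdata()), [height, width, 1]) / 255.0)

pre_list = sess.run(predict, feed_dict={x: batch, keep_prob: 1})
for row in pre_list:
    print(''.join(characters[j] for j in row))         # one decoded string per input image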


Animated demo:
[Image: temp.gif]

One more thing: today a developer from GitHub (online handle Diamond) contacted me over QQ and shared the code of the XG-CNN alphanumeric CAPTCHA network model that the mbus CAPTCHA recognition platform relies on. It is only a guess on his part and has not been tested; thanks to him for sharing.
Create a file named resnet18.prototxt and paste in the following source code:
  1. name: "ResNet-18"
  2. layer {
  3.     name: "data"
  4.     type: "Data"
  5.     top: "data"
  6.     top: "label"
  7.     include {
  8.         phase: TRAIN
  9.     }
  10.     transform_param {
  11.         mirror: true
  12.         crop_size: 224
  13.         mean_value: 104
  14.         mean_value: 117
  15.         mean_value: 123
  16.     }
  17.     data_param {
  18.         source: ""
  19.         batch_size: 100

  20.     }
  21.      input_param {
  22.              shape: {
  23.             dim: 10
  24.             dim: 3
  25.             dim: 224
  26.             dim: 224
  27.         }
  28.     }
  29. }
  30. layer {
  31.     name: "data"
  32.     type: "Data"
  33.     top: "data"
  34.     top: "label"
  35.     include {
  36.         phase: TEST
  37.     }
  38.     transform_param {
  39.         mirror: false
  40.         crop_size: 224
  41.         mean_value: 104
  42.         mean_value: 117
  43.         mean_value: 123
  44.     }
  45.     data_param {
  46.         source: ""
  47.         batch_size: 50

  48.     }
  49. }

  50. layer {
  51.     bottom: "data"
  52.     top: "conv1"
  53.     name: "conv1"
  54.     type: "Convolution"
  55.     convolution_param {
  56.         num_output: 64
  57.         kernel_size: 7
  58.         pad: 3
  59.         stride: 2
  60.         weight_filler {
  61.             type: "msra"
  62.         }
  63.         bias_term: false

  64.     }
  65. }

  66. layer {
  67.     bottom: "conv1"
  68.     top: "conv1"
  69.     name: "bn_conv1"
  70.     type: "BatchNorm"

  71. }

  72. layer {
  73.     bottom: "conv1"
  74.     top: "conv1"
  75.     name: "scale_conv1"
  76.     type: "Scale"
  77.     scale_param {
  78.         bias_term: true
  79.     }
  80. }

  81. layer {
  82.     bottom: "conv1"
  83.     top: "conv1"
  84.     name: "conv1_relu"
  85.     type: "ReLU"
  86. }

  87. layer {
  88.     bottom: "conv1"
  89.     top: "pool1"
  90.     name: "pool1"
  91.     type: "Pooling"
  92.     pooling_param {
  93.         kernel_size: 3
  94.         stride: 2
  95.         pool: MAX
  96.     }
  97. }

  98. layer {
  99.     bottom: "pool1"
  100.     top: "res2a_branch1"
  101.     name: "res2a_branch1"
  102.     type: "Convolution"
  103.     convolution_param {
  104.         num_output: 64
  105.         kernel_size: 1
  106.         pad: 0
  107.         stride: 1
  108.         weight_filler {
  109.             type: "msra"
  110.         }
  111.         bias_term: false

  112.     }
  113. }

  114. layer {
  115.     bottom: "res2a_branch1"
  116.     top: "res2a_branch1"
  117.     name: "bn2a_branch1"
  118.     type: "BatchNorm"

  119. }

  120. layer {
  121.     bottom: "res2a_branch1"
  122.     top: "res2a_branch1"
  123.     name: "scale2a_branch1"
  124.     type: "Scale"
  125.     scale_param {
  126.         bias_term: true
  127.     }
  128. }

  129. layer {
  130.     bottom: "pool1"
  131.     top: "res2a_branch2a"
  132.     name: "res2a_branch2a"
  133.     type: "Convolution"
  134.     convolution_param {
  135.         num_output: 64
  136.         kernel_size: 3
  137.         pad: 1
  138.         stride: 1
  139.         weight_filler {
  140.             type: "msra"
  141.         }
  142.         bias_term: false

  143.     }
  144. }

  145. layer {
  146.     bottom: "res2a_branch2a"
  147.     top: "res2a_branch2a"
  148.     name: "bn2a_branch2a"
  149.     type: "BatchNorm"

  150. }

  151. layer {
  152.     bottom: "res2a_branch2a"
  153.     top: "res2a_branch2a"
  154.     name: "scale2a_branch2a"
  155.     type: "Scale"
  156.     scale_param {
  157.         bias_term: true
  158.     }
  159. }

  160. layer {
  161.     bottom: "res2a_branch2a"
  162.     top: "res2a_branch2a"
  163.     name: "res2a_branch2a_relu"
  164.     type: "ReLU"
  165. }

  166. layer {
  167.     bottom: "res2a_branch2a"
  168.     top: "res2a_branch2b"
  169.     name: "res2a_branch2b"
  170.     type: "Convolution"
  171.     convolution_param {
  172.         num_output: 64
  173.         kernel_size: 3
  174.         pad: 1
  175.         stride: 1
  176.         weight_filler {
  177.             type: "msra"
  178.         }
  179.         bias_term: false

  180.     }
  181. }

  182. layer {
  183.     bottom: "res2a_branch2b"
  184.     top: "res2a_branch2b"
  185.     name: "bn2a_branch2b"
  186.     type: "BatchNorm"

  187. }

  188. layer {
  189.     bottom: "res2a_branch2b"
  190.     top: "res2a_branch2b"
  191.     name: "scale2a_branch2b"
  192.     type: "Scale"
  193.     scale_param {
  194.         bias_term: true
  195.     }
  196. }

  197. layer {
  198.     bottom: "res2a_branch1"
  199.     bottom: "res2a_branch2b"
  200.     top: "res2a"
  201.     name: "res2a"
  202.     type: "Eltwise"
  203.     eltwise_param {
  204.         operation: SUM
  205.     }
  206. }

  207. layer {
  208.     bottom: "res2a"
  209.     top: "res2a"
  210.     name: "res2a_relu"
  211.     type: "ReLU"
  212. }

  213. layer {
  214.     bottom: "res2a"
  215.     top: "res2b_branch2a"
  216.     name: "res2b_branch2a"
  217.     type: "Convolution"
  218.     convolution_param {
  219.         num_output: 64
  220.         kernel_size: 3
  221.         pad: 1
  222.         stride: 1
  223.         weight_filler {
  224.             type: "msra"
  225.         }
  226.         bias_term: false

  227.     }
  228. }

  229. layer {
  230.     bottom: "res2b_branch2a"
  231.     top: "res2b_branch2a"
  232.     name: "bn2b_branch2a"
  233.     type: "BatchNorm"

  234. }

  235. layer {
  236.     bottom: "res2b_branch2a"
  237.     top: "res2b_branch2a"
  238.     name: "scale2b_branch2a"
  239.     type: "Scale"
  240.     scale_param {
  241.         bias_term: true
  242.     }
  243. }

  244. layer {
  245.     bottom: "res2b_branch2a"
  246.     top: "res2b_branch2a"
  247.     name: "res2b_branch2a_relu"
  248.     type: "ReLU"
  249. }

  250. layer {
  251.     bottom: "res2b_branch2a"
  252.     top: "res2b_branch2b"
  253.     name: "res2b_branch2b"
  254.     type: "Convolution"
  255.     convolution_param {
  256.         num_output: 64
  257.         kernel_size: 3
  258.         pad: 1
  259.         stride: 1
  260.         weight_filler {
  261.             type: "msra"
  262.         }
  263.         bias_term: false

  264.     }
  265. }

  266. layer {
  267.     bottom: "res2b_branch2b"
  268.     top: "res2b_branch2b"
  269.     name: "bn2b_branch2b"
  270.     type: "BatchNorm"

  271. }

  272. layer {
  273.     bottom: "res2b_branch2b"
  274.     top: "res2b_branch2b"
  275.     name: "scale2b_branch2b"
  276.     type: "Scale"
  277.     scale_param {
  278.         bias_term: true
  279.     }
  280. }

  281. layer {
  282.     bottom: "res2a"
  283.     bottom: "res2b_branch2b"
  284.     top: "res2b"
  285.     name: "res2b"
  286.     type: "Eltwise"
  287.     eltwise_param {
  288.         operation: SUM
  289.     }
  290. }

  291. layer {
  292.     bottom: "res2b"
  293.     top: "res2b"
  294.     name: "res2b_relu"
  295.     type: "ReLU"
  296. }

  297. layer {
  298.     bottom: "res2b"
  299.     top: "res3a_branch1"
  300.     name: "res3a_branch1"
  301.     type: "Convolution"
  302.     convolution_param {
  303.         num_output: 128
  304.         kernel_size: 1
  305.         pad: 0
  306.         stride: 2
  307.         weight_filler {
  308.             type: "msra"
  309.         }
  310.         bias_term: false

  311.     }
  312. }

  313. layer {
  314.     bottom: "res3a_branch1"
  315.     top: "res3a_branch1"
  316.     name: "bn3a_branch1"
  317.     type: "BatchNorm"

  318. }

  319. layer {
  320.     bottom: "res3a_branch1"
  321.     top: "res3a_branch1"
  322.     name: "scale3a_branch1"
  323.     type: "Scale"
  324.     scale_param {
  325.         bias_term: true
  326.     }
  327. }

  328. layer {
  329.     bottom: "res2b"
  330.     top: "res3a_branch2a"
  331.     name: "res3a_branch2a"
  332.     type: "Convolution"
  333.     convolution_param {
  334.         num_output: 128
  335.         kernel_size: 3
  336.         pad: 1
  337.         stride: 2
  338.         weight_filler {
  339.             type: "msra"
  340.         }
  341.         bias_term: false

  342.     }
  343. }

  344. layer {
  345.     bottom: "res3a_branch2a"
  346.     top: "res3a_branch2a"
  347.     name: "bn3a_branch2a"
  348.     type: "BatchNorm"

  349. }

  350. layer {
  351.     bottom: "res3a_branch2a"
  352.     top: "res3a_branch2a"
  353.     name: "scale3a_branch2a"
  354.     type: "Scale"
  355.     scale_param {
  356.         bias_term: true
  357.     }
  358. }

  359. layer {
  360.     bottom: "res3a_branch2a"
  361.     top: "res3a_branch2a"
  362.     name: "res3a_branch2a_relu"
  363.     type: "ReLU"
  364. }

  365. layer {
  366.     bottom: "res3a_branch2a"
  367.     top: "res3a_branch2b"
  368.     name: "res3a_branch2b"
  369.     type: "Convolution"
  370.     convolution_param {
  371.         num_output: 128
  372.         kernel_size: 3
  373.         pad: 1
  374.         stride: 1
  375.         weight_filler {
  376.             type: "msra"
  377.         }
  378.         bias_term: false

  379.     }
  380. }

  381. layer {
  382.     bottom: "res3a_branch2b"
  383.     top: "res3a_branch2b"
  384.     name: "bn3a_branch2b"
  385.     type: "BatchNorm"

  386. }

  387. layer {
  388.     bottom: "res3a_branch2b"
  389.     top: "res3a_branch2b"
  390.     name: "scale3a_branch2b"
  391.     type: "Scale"
  392.     scale_param {
  393.         bias_term: true
  394.     }
  395. }

  396. layer {
  397.     bottom: "res3a_branch1"
  398.     bottom: "res3a_branch2b"
  399.     top: "res3a"
  400.     name: "res3a"
  401.     type: "Eltwise"
  402.     eltwise_param {
  403.         operation: SUM
  404.     }
  405. }

  406. layer {
  407.     bottom: "res3a"
  408.     top: "res3a"
  409.     name: "res3a_relu"
  410.     type: "ReLU"
  411. }

  412. layer {
  413.     bottom: "res3a"
  414.     top: "res3b_branch2a"
  415.     name: "res3b_branch2a"
  416.     type: "Convolution"
  417.     convolution_param {
  418.         num_output: 128
  419.         kernel_size: 3
  420.         pad: 1
  421.         stride: 1
  422.         weight_filler {
  423.             type: "msra"
  424.         }
  425.         bias_term: false

  426.     }
  427. }

  428. layer {
  429.     bottom: "res3b_branch2a"
  430.     top: "res3b_branch2a"
  431.     name: "bn3b_branch2a"
  432.     type: "BatchNorm"

  433. }

  434. layer {
  435.     bottom: "res3b_branch2a"
  436.     top: "res3b_branch2a"
  437.     name: "scale3b_branch2a"
  438.     type: "Scale"
  439.     scale_param {
  440.         bias_term: true
  441.     }
  442. }

  443. layer {
  444.     bottom: "res3b_branch2a"
  445.     top: "res3b_branch2a"
  446.     name: "res3b_branch2a_relu"
  447.     type: "ReLU"
  448. }

  449. layer {
  450.     bottom: "res3b_branch2a"
  451.     top: "res3b_branch2b"
  452.     name: "res3b_branch2b"
  453.     type: "Convolution"
  454.     convolution_param {
  455.         num_output: 128
  456.         kernel_size: 3
  457.         pad: 1
  458.         stride: 1
  459.         weight_filler {
  460.             type: "msra"
  461.         }
  462.         bias_term: false

  463.     }
  464. }

  465. layer {
  466.     bottom: "res3b_branch2b"
  467.     top: "res3b_branch2b"
  468.     name: "bn3b_branch2b"
  469.     type: "BatchNorm"

  470. }

  471. layer {
  472.     bottom: "res3b_branch2b"
  473.     top: "res3b_branch2b"
  474.     name: "scale3b_branch2b"
  475.     type: "Scale"
  476.     scale_param {
  477.         bias_term: true
  478.     }
  479. }

  480. layer {
  481.     bottom: "res3a"
  482.     bottom: "res3b_branch2b"
  483.     top: "res3b"
  484.     name: "res3b"
  485.     type: "Eltwise"
  486.     eltwise_param {
  487.         operation: SUM
  488.     }
  489. }

  490. layer {
  491.     bottom: "res3b"
  492.     top: "res3b"
  493.     name: "res3b_relu"
  494.     type: "ReLU"
  495. }

  496. layer {
  497.     bottom: "res3b"
  498.     top: "res4a_branch1"
  499.     name: "res4a_branch1"
  500.     type: "Convolution"
  501.     convolution_param {
  502.         num_output: 256
  503.         kernel_size: 1
  504.         pad: 0
  505.         stride: 2
  506.         weight_filler {
  507.             type: "msra"
  508.         }
  509.         bias_term: false

  510.     }
  511. }

  512. layer {
  513.     bottom: "res4a_branch1"
  514.     top: "res4a_branch1"
  515.     name: "bn4a_branch1"
  516.     type: "BatchNorm"

  517. }

  518. layer {
  519.     bottom: "res4a_branch1"
  520.     top: "res4a_branch1"
  521.     name: "scale4a_branch1"
  522.     type: "Scale"
  523.     scale_param {
  524.         bias_term: true
  525.     }
  526. }

  527. layer {
  528.     bottom: "res3b"
  529.     top: "res4a_branch2a"
  530.     name: "res4a_branch2a"
  531.     type: "Convolution"
  532.     convolution_param {
  533.         num_output: 256
  534.         kernel_size: 3
  535.         pad: 1
  536.         stride: 2
  537.         weight_filler {
  538.             type: "msra"
  539.         }
  540.         bias_term: false

  541.     }
  542. }

  543. layer {
  544.     bottom: "res4a_branch2a"
  545.     top: "res4a_branch2a"
  546.     name: "bn4a_branch2a"
  547.     type: "BatchNorm"

  548. }

  549. layer {
  550.     bottom: "res4a_branch2a"
  551.     top: "res4a_branch2a"
  552.     name: "scale4a_branch2a"
  553.     type: "Scale"
  554.     scale_param {
  555.         bias_term: true
  556.     }
  557. }

  558. layer {
  559.     bottom: "res4a_branch2a"
  560.     top: "res4a_branch2a"
  561.     name: "res4a_branch2a_relu"
  562.     type: "ReLU"
  563. }

  564. layer {
  565.     bottom: "res4a_branch2a"
  566.     top: "res4a_branch2b"
  567.     name: "res4a_branch2b"
  568.     type: "Convolution"
  569.     convolution_param {
  570.         num_output: 256
  571.         kernel_size: 3
  572.         pad: 1
  573.         stride: 1
  574.         weight_filler {
  575.             type: "msra"
  576.         }
  577.         bias_term: false

  578.     }
  579. }

  580. layer {
  581.     bottom: "res4a_branch2b"
  582.     top: "res4a_branch2b"
  583.     name: "bn4a_branch2b"
  584.     type: "BatchNorm"

  585. }

  586. layer {
  587.     bottom: "res4a_branch2b"
  588.     top: "res4a_branch2b"
  589.     name: "scale4a_branch2b"
  590.     type: "Scale"
  591.     scale_param {
  592.         bias_term: true
  593.     }
  594. }

  595. layer {
  596.     bottom: "res4a_branch1"
  597.     bottom: "res4a_branch2b"
  598.     top: "res4a"
  599.     name: "res4a"
  600.     type: "Eltwise"
  601.     eltwise_param {
  602.         operation: SUM
  603.     }
  604. }

  605. layer {
  606.     bottom: "res4a"
  607.     top: "res4a"
  608.     name: "res4a_relu"
  609.     type: "ReLU"
  610. }

  611. layer {
  612.     bottom: "res4a"
  613.     top: "res4b_branch2a"
  614.     name: "res4b_branch2a"
  615.     type: "Convolution"
  616.     convolution_param {
  617.         num_output: 256
  618.         kernel_size: 3
  619.         pad: 1
  620.         stride: 1
  621.         weight_filler {
  622.             type: "msra"
  623.         }
  624.         bias_term: false

  625.     }
  626. }

  627. layer {
  628.     bottom: "res4b_branch2a"
  629.     top: "res4b_branch2a"
  630.     name: "bn4b_branch2a"
  631.     type: "BatchNorm"

  632. }

  633. layer {
  634.     bottom: "res4b_branch2a"
  635.     top: "res4b_branch2a"
  636.     name: "scale4b_branch2a"
  637.     type: "Scale"
  638.     scale_param {
  639.         bias_term: true
  640.     }
  641. }

  642. layer {
  643.     bottom: "res4b_branch2a"
  644.     top: "res4b_branch2a"
  645.     name: "res4b_branch2a_relu"
  646.     type: "ReLU"
  647. }

  648. layer {
  649.     bottom: "res4b_branch2a"
  650.     top: "res4b_branch2b"
  651.     name: "res4b_branch2b"
  652.     type: "Convolution"
  653.     convolution_param {
  654.         num_output: 256
  655.         kernel_size: 3
  656.         pad: 1
  657.         stride: 1
  658.         weight_filler {
  659.             type: "msra"
  660.         }
  661.         bias_term: false

  662.     }
  663. }

  664. layer {
  665.     bottom: "res4b_branch2b"
  666.     top: "res4b_branch2b"
  667.     name: "bn4b_branch2b"
  668.     type: "BatchNorm"

  669. }

  670. layer {
  671.     bottom: "res4b_branch2b"
  672.     top: "res4b_branch2b"
  673.     name: "scale4b_branch2b"
  674.     type: "Scale"
  675.     scale_param {
  676.         bias_term: true
  677.     }
  678. }

  679. layer {
  680.     bottom: "res4a"
  681.     bottom: "res4b_branch2b"
  682.     top: "res4b"
  683.     name: "res4b"
  684.     type: "Eltwise"
  685.     eltwise_param {
  686.         operation: SUM
  687.     }
  688. }

  689. layer {
  690.     bottom: "res4b"
  691.     top: "res4b"
  692.     name: "res4b_relu"
  693.     type: "ReLU"
  694. }

  695. layer {
  696.     bottom: "res4b"
  697.     top: "res5a_branch1"
  698.     name: "res5a_branch1"
  699.     type: "Convolution"
  700.     convolution_param {
  701.         num_output: 512
  702.         kernel_size: 1
  703.         pad: 0
  704.         stride: 2
  705.         weight_filler {
  706.             type: "msra"
  707.         }
  708.         bias_term: false

  709.     }
  710. }

  711. layer {
  712.     bottom: "res5a_branch1"
  713.     top: "res5a_branch1"
  714.     name: "bn5a_branch1"
  715.     type: "BatchNorm"

  716. }

  717. layer {
  718.     bottom: "res5a_branch1"
  719.     top: "res5a_branch1"
  720.     name: "scale5a_branch1"
  721.     type: "Scale"
  722.     scale_param {
  723.         bias_term: true
  724.     }
  725. }

  726. layer {
  727.     bottom: "res4b"
  728.     top: "res5a_branch2a"
  729.     name: "res5a_branch2a"
  730.     type: "Convolution"
  731.     convolution_param {
  732.         num_output: 512
  733.         kernel_size: 3
  734.         pad: 1
  735.         stride: 2
  736.         weight_filler {
  737.             type: "msra"
  738.         }
  739.         bias_term: false

  740.     }
  741. }

  742. layer {
  743.     bottom: "res5a_branch2a"
  744.     top: "res5a_branch2a"
  745.     name: "bn5a_branch2a"
  746.     type: "BatchNorm"

  747. }

  748. layer {
  749.     bottom: "res5a_branch2a"
  750.     top: "res5a_branch2a"
  751.     name: "scale5a_branch2a"
  752.     type: "Scale"
  753.     scale_param {
  754.         bias_term: true
  755.     }
  756. }

  757. layer {
  758.     bottom: "res5a_branch2a"
  759.     top: "res5a_branch2a"
  760.     name: "res5a_branch2a_relu"
  761.     type: "ReLU"
  762. }

  763. layer {
  764.     bottom: "res5a_branch2a"
  765.     top: "res5a_branch2b"
  766.     name: "res5a_branch2b"
  767.     type: "Convolution"
  768.     convolution_param {
  769.         num_output: 512
  770.         kernel_size: 3
  771.         pad: 1
  772.         stride: 1
  773.         weight_filler {
  774.             type: "msra"
  775.         }
  776.         bias_term: false

  777.     }
  778. }

  779. layer {
  780.     bottom: "res5a_branch2b"
  781.     top: "res5a_branch2b"
  782.     name: "bn5a_branch2b"
  783.     type: "BatchNorm"

  784. }

  785. layer {
  786.     bottom: "res5a_branch2b"
  787.     top: "res5a_branch2b"
  788.     name: "scale5a_branch2b"
  789.     type: "Scale"
  790.     scale_param {
  791.         bias_term: true
  792.     }
  793. }

  794. layer {
  795.     bottom: "res5a_branch1"
  796.     bottom: "res5a_branch2b"
  797.     top: "res5a"
  798.     name: "res5a"
  799.     type: "Eltwise"
  800.     eltwise_param {
  801.         operation: SUM
  802.     }
  803. }

  804. layer {
  805.     bottom: "res5a"
  806.     top: "res5a"
  807.     name: "res5a_relu"
  808.     type: "ReLU"
  809. }

  810. layer {
  811.     bottom: "res5a"
  812.     top: "res5b_branch2a"
  813.     name: "res5b_branch2a"
  814.     type: "Convolution"
  815.     convolution_param {
  816.         num_output: 512
  817.         kernel_size: 3
  818.         pad: 1
  819.         stride: 1
  820.         weight_filler {
  821.             type: "msra"
  822.         }
  823.         bias_term: false

  824.     }
  825. }

  826. layer {
  827.     bottom: "res5b_branch2a"
  828.     top: "res5b_branch2a"
  829.     name: "bn5b_branch2a"
  830.     type: "BatchNorm"

  831. }

  832. layer {
  833.     bottom: "res5b_branch2a"
  834.     top: "res5b_branch2a"
  835.     name: "scale5b_branch2a"
  836.     type: "Scale"
  837.     scale_param {
  838.         bias_term: true
  839.     }
  840. }

  841. layer {
  842.     bottom: "res5b_branch2a"
  843.     top: "res5b_branch2a"
  844.     name: "res5b_branch2a_relu"
  845.     type: "ReLU"
  846. }

  847. layer {
  848.     bottom: "res5b_branch2a"
  849.     top: "res5b_branch2b"
  850.     name: "res5b_branch2b"
  851.     type: "Convolution"
  852.     convolution_param {
  853.         num_output: 512
  854.         kernel_size: 3
  855.         pad: 1
  856.         stride: 1
  857.         weight_filler {
  858.             type: "msra"
  859.         }
  860.         bias_term: false

  861.     }
  862. }

  863. layer {
  864.     bottom: "res5b_branch2b"
  865.     top: "res5b_branch2b"
  866.     name: "bn5b_branch2b"
  867.     type: "BatchNorm"

  868. }

  869. layer {
  870.     bottom: "res5b_branch2b"
  871.     top: "res5b_branch2b"
  872.     name: "scale5b_branch2b"
  873.     type: "Scale"
  874.     scale_param {
  875.         bias_term: true
  876.     }
  877. }

  878. layer {
  879.     bottom: "res5a"
  880.     bottom: "res5b_branch2b"
  881.     top: "res5b"
  882.     name: "res5b"
  883.     type: "Eltwise"
  884.     eltwise_param {
  885.         operation: SUM
  886.     }
  887. }

  888. layer {
  889.     bottom: "res5b"
  890.     top: "res5b"
  891.     name: "res5b_relu"
  892.     type: "ReLU"
  893. }

  894. layer {
  895.     bottom: "res5b"
  896.     top: "pool5"
  897.     name: "pool5"
  898.     type: "Pooling"
  899.     pooling_param {
  900.         kernel_size: 7
  901.         stride: 1
  902.         pool: AVE
  903.     }
  904. }

  905. ###loss1
  906. layer {
  907.     bottom: "pool5"
  908.     top: "fc10001"
  909.     name: "fc10001"
  910.     type: "InnerProduct"
  911.     param {
  912.         lr_mult: 1
  913.         decay_mult: 1
  914.     }
  915.     param {
  916.         lr_mult: 2
  917.         decay_mult: 1
  918.     }
  919.     inner_product_param {
  920.         num_output: 37
  921.         weight_filler {
  922.             type: "xavier"
  923.         }
  924.         bias_filler {
  925.             type: "constant"
  926.             value: 0
  927.         }
  928.     }
  929. }

  930. layer {
  931.     bottom: "fc10001"
  932.     bottom: "label"
  933.     name: "loss1"
  934.     type: "SoftmaxWithLoss"
  935.     top: "loss1"
  936. }

  937. layer {
  938.     bottom: "fc10001"
  939.     bottom: "label"
  940.     top: "acc/top-1"
  941.     name: "acc/top-1"
  942.     type: "Accuracy"
  943.     include {
  944.         phase: TEST
  945.     }
  946. }

  947. layer {
  948.     bottom: "fc10001"
  949.     bottom: "label"
  950.     top: "acc/top-5"
  951.     name: "acc/top-5"
  952.     type: "Accuracy"
  953.     include {
  954.         phase: TEST
  955.     }
  956.     accuracy_param {
  957.         top_k: 37
  958.     }
  959. }
  960. layer {
  961.     bottom: "pool5"
  962.     top: "fc10002"
  963.     name: "fc10002"
  964.     type: "InnerProduct"
  965.     param {
  966.         lr_mult: 1
  967.         decay_mult: 1
  968.     }
  969.     param {
  970.         lr_mult: 2
  971.         decay_mult: 1
  972.     }
  973.     inner_product_param {
  974.         num_output: 37
  975.         weight_filler {
  976.             type: "xavier"
  977.         }
  978.         bias_filler {
  979.             type: "constant"
  980.             value: 0
  981.         }
  982.     }
  983. }

  984. layer {
  985.     bottom: "fc10002"
  986.     bottom: "label"
  987.     name: "loss2"
  988.     type: "SoftmaxWithLoss"
  989.     top: "loss2"
  990. }

  991. layer {
  992.     bottom: "fc10002"
  993.     bottom: "label"
  994.     top: "acc/top-1"
  995.     name: "acc/top-1"
  996.     type: "Accuracy"
  997.     include {
  998.         phase: TEST
  999.     }
  1000. }

  1001. layer {
  1002.     bottom: "fc10002"
  1003.     bottom: "label"
  1004.     top: "acc/top-5"
  1005.     name: "acc/top-5"
  1006.     type: "Accuracy"
  1007.     include {
  1008.         phase: TEST
  1009.     }
  1010.     accuracy_param {
  1011.         top_k: 37
  1012.     }
  1013. } layer {
  1014.     bottom: "pool5"
  1015.     top: "fc10003"
  1016.     name: "fc10003"
  1017.     type: "InnerProduct"
  1018.     param {
  1019.         lr_mult: 1
  1020.         decay_mult: 1
  1021.     }
  1022.     param {
  1023.         lr_mult: 2
  1024.         decay_mult: 1
  1025.     }
  1026.     inner_product_param {
  1027.         num_output: 37
  1028.         weight_filler {
  1029.             type: "xavier"
  1030.         }
  1031.         bias_filler {
  1032.             type: "constant"
  1033.             value: 0
  1034.         }
  1035.     }
  1036. }

  1037. layer {
  1038.     bottom: "fc10003"
  1039.     bottom: "label"
  1040.     name: "loss3"
  1041.     type: "SoftmaxWithLoss"
  1042.     top: "loss3"
  1043. }

  1044. layer {
  1045.     bottom: "fc10003"
  1046.     bottom: "label"
  1047.     top: "acc/top-1"
  1048.     name: "acc/top-1"
  1049.     type: "Accuracy"
  1050.     include {
  1051.         phase: TEST
  1052.     }
  1053. }

  1054. layer {
  1055.     bottom: "fc10003"
  1056.     bottom: "label"
  1057.     top: "acc/top-37"
  1058.     name: "acc/top-37"
  1059.     type: "Accuracy"
  1060.     include {
  1061.         phase: TEST
  1062.     }
  1063.     accuracy_param {
  1064.         top_k: 37
  1065.     }
  1066. }
  1067. ##loss4
  1068. layer {
  1069.     bottom: "pool5"
  1070.     top: "fc10004"
  1071.     name: "fc10004"
  1072.     type: "InnerProduct"
  1073.     param {
  1074.         lr_mult: 1
  1075.         decay_mult: 1
  1076.     }
  1077.     param {
  1078.         lr_mult: 2
  1079.         decay_mult: 1
  1080.     }
  1081.     inner_product_param {
  1082.         num_output: 37
  1083.         weight_filler {
  1084.             type: "xavier"
  1085.         }
  1086.         bias_filler {
  1087.             type: "constant"
  1088.             value: 0
  1089.         }
  1090.     }
  1091. }

  1092. layer {
  1093.     bottom: "fc10004"
  1094.     bottom: "label"
  1095.     name: "loss4"
  1096.     type: "SoftmaxWithLoss"
  1097.     top: "loss4"
  1098. }

  1099. layer {
  1100.     bottom: "fc10004"
  1101.     bottom: "label"
  1102.     top: "acc/top-1"
  1103.     name: "acc/top-1"
  1104.     type: "Accuracy"
  1105.     include {
  1106.         phase: TEST
  1107.     }
  1108. }

  1109. layer {
  1110.     bottom: "fc10004"
  1111.     bottom: "label"
  1112.     top: "acc/top-5"
  1113.     name: "acc/top-5"
  1114.     type: "Accuracy"
  1115.     include {
  1116.         phase: TEST
  1117.     }
  1118.     accuracy_param {
  1119.         top_k: 37
  1120.     }
  1121. }
  1122. ###loss5
  1123. layer {
  1124.     bottom: "pool5"
  1125.     top: "fc10005"
  1126.     name: "fc10005"
  1127.     type: "InnerProduct"
  1128.     param {
  1129.         lr_mult: 1
  1130.         decay_mult: 1
  1131.     }
  1132.     param {
  1133.         lr_mult: 2
  1134.         decay_mult: 1
  1135.     }
  1136.     inner_product_param {
  1137.         num_output: 37
  1138.         weight_filler {
  1139.             type: "xavier"
  1140.         }
  1141.         bias_filler {
  1142.             type: "constant"
  1143.             value: 0
  1144.         }
  1145.     }
  1146. }

  1147. layer {
  1148.     bottom: "fc10005"
  1149.     bottom: "label"
  1150.     name: "loss5"
  1151.     type: "SoftmaxWithLoss"
  1152.     top: "loss5"
  1153. }

  1154. layer {
  1155.     bottom: "fc10005"
  1156.     bottom: "label"
  1157.     top: "acc/top-1"
  1158.     name: "acc/top-1"
  1159.     type: "Accuracy"
  1160.     include {
  1161.         phase: TEST
  1162.     }
  1163. }

  1164. layer {
  1165.     bottom: "fc10005"
  1166.     bottom: "label"
  1167.     top: "acc/top-5"
  1168.     name: "acc/top-5"
  1169.     type: "Accuracy"
  1170.     include {
  1171.         phase: TEST
  1172.     }
  1173.     accuracy_param {
  1174.         top_k: 37
  1175.     }
  1176. }
  1177. ###loss6
  1178. layer {
  1179.     bottom: "pool5"
  1180.     top: "fc10006"
  1181.     name: "fc10006"
  1182.     type: "InnerProduct"
  1183.     param {
  1184.         lr_mult: 1
  1185.         decay_mult: 1
  1186.     }
  1187.     param {
  1188.         lr_mult: 2
  1189.         decay_mult: 1
  1190.     }
  1191.     inner_product_param {
  1192.         num_output: 37
  1193.         weight_filler {
  1194.             type: "xavier"
  1195.         }
  1196.         bias_filler {
  1197.             type: "constant"
  1198.             value: 0
  1199.         }
  1200.     }
  1201. }

  1202. layer {
  1203.     bottom: "fc10006"
  1204.     bottom: "label"
  1205.     name: "loss6"
  1206.     type: "SoftmaxWithLoss"
  1207.     top: "loss6"
  1208. }

  1209. layer {
  1210.     bottom: "fc10006"
  1211.     bottom: "label"
  1212.     top: "acc/top-1"
  1213.     name: "acc/top-1"
  1214.     type: "Accuracy"
  1215.     include {
  1216.         phase: TEST
  1217.     }
  1218. }

  1219. layer {
  1220.     bottom: "fc10006"
  1221.     bottom: "label"
  1222.     top: "acc/top-6"
  1223.     name: "acc/top-6"
  1224.     type: "Accuracy"
  1225.     include {
  1226.         phase: TEST
  1227.     }
  1228.     accuracy_param {
  1229.         top_k: 37
  1230.     }
  1231. }
  1232. ###loss7
  1233. layer {
  1234.     bottom: "pool5"
  1235.     top: "fc10007"
  1236.     name: "fc10007"
  1237.     type: "InnerProduct"
  1238.     param {
  1239.         lr_mult: 1
  1240.         decay_mult: 1
  1241.     }
  1242.     param {
  1243.         lr_mult: 2
  1244.         decay_mult: 1
  1245.     }
  1246.     inner_product_param {
  1247.         num_output: 37
  1248.         weight_filler {
  1249.             type: "xavier"
  1250.         }
  1251.         bias_filler {
  1252.             type: "constant"
  1253.             value: 0
  1254.         }
  1255.     }
  1256. }

  1257. layer {
  1258.     bottom: "fc10007"
  1259.     bottom: "label"
  1260.     name: "loss7"
  1261.     type: "SoftmaxWithLoss"
  1262.     top: "loss7"
  1263. }

  1264. layer {
  1265.     bottom: "fc10007"
  1266.     bottom: "label"
  1267.     top: "acc/top-1"
  1268.     name: "acc/top-1"
  1269.     type: "Accuracy"
  1270.     include {
  1271.         phase: TEST
  1272.     }
  1273. }

  1274. layer {
  1275.     bottom: "fc10007"
  1276.     bottom: "label"
  1277.     top: "acc/top-6"
  1278.     name: "acc/top-6"
  1279.     type: "Accuracy"
  1280.     include {
  1281.         phase: TEST
  1282.     }
  1283.     accuracy_param {
  1284.         top_k: 37
  1285.     }
  1286. }
  1287. ###loss8
  1288. layer {
  1289.     bottom: "pool5"
  1290.     top: "fc10008"
  1291.     name: "fc10008"
  1292.     type: "InnerProduct"
  1293.     param {
  1294.         lr_mult: 1
  1295.         decay_mult: 1
  1296.     }
  1297.     param {
  1298.         lr_mult: 2
  1299.         decay_mult: 1
  1300.     }
  1301.     inner_product_param {
  1302.         num_output: 37
  1303.         weight_filler {
  1304.             type: "xavier"
  1305.         }
  1306.         bias_filler {
  1307.             type: "constant"
  1308.             value: 0
  1309.         }
  1310.     }
  1311. }

  1312. layer {
  1313.     bottom: "fc10008"
  1314.     bottom: "label"
  1315.     name: "loss8"
  1316.     type: "SoftmaxWithLoss"
  1317.     top: "loss8"
  1318. }

  1319. layer {
  1320.     bottom: "fc10008"
  1321.     bottom: "label"
  1322.     top: "acc/top-1"
  1323.     name: "acc/top-1"
  1324.     type: "Accuracy"
  1325.     include {
  1326.         phase: TEST
  1327.     }
  1328. }

  1329. layer {
  1330.     bottom: "fc10008"
  1331.     bottom: "label"
  1332.     top: "acc/top-6"
  1333.     name: "acc/top-6"
  1334.     type: "Accuracy"
  1335.     include {
  1336.         phase: TEST
  1337.     }
  1338.     accuracy_param {
  1339.         top_k: 37
  1340.     }
  1341. }
  1342. ###loss9
  1343. layer {
  1344.     bottom: "pool5"
  1345.     top: "fc10009"
  1346.     name: "fc10009"
  1347.     type: "InnerProduct"
  1348.     param {
  1349.         lr_mult: 1
  1350.         decay_mult: 1
  1351.     }
  1352.     param {
  1353.         lr_mult: 2
  1354.         decay_mult: 1
  1355.     }
  1356.     inner_product_param {
  1357.         num_output: 37
  1358.         weight_filler {
  1359.             type: "xavier"
  1360.         }
  1361.         bias_filler {
  1362.             type: "constant"
  1363.             value: 0
  1364.         }
  1365.     }
  1366. }

  1367. layer {
  1368.     bottom: "fc10009"
  1369.     bottom: "label"
  1370.     name: "loss9"
  1371.     type: "SoftmaxWithLoss"
  1372.     top: "loss9"
  1373. }

  1374. layer {
  1375.     bottom: "fc10009"
  1376.     bottom: "label"
  1377.     top: "acc/top-1"
  1378.     name: "acc/top-1"
  1379.     type: "Accuracy"
  1380.     include {
  1381.         phase: TEST
  1382.     }
  1383. }

  1384. layer {
  1385.     bottom: "fc10009"
  1386.     bottom: "label"
  1387.     top: "acc/top-6"
  1388.     name: "acc/top-6"
  1389.     type: "Accuracy"
  1390.     include {
  1391.         phase: TEST
  1392.     }
  1393.     accuracy_param {
  1394.         top_k: 37
  1395.     }
  1396. }
  1397. ###loss 10
  1398. layer {
  1399.     bottom: "pool5"
  1400.     top: "fc100010"
  1401.     name: "fc100010"
  1402.     type: "InnerProduct"
  1403.     param {
  1404.         lr_mult: 1
  1405.         decay_mult: 1
  1406.     }
  1407.     param {
  1408.         lr_mult: 2
  1409.         decay_mult: 1
  1410.     }
  1411.     inner_product_param {
  1412.         num_output: 37
  1413.         weight_filler {
  1414.             type: "xavier"
  1415.         }
  1416.         bias_filler {
  1417.             type: "constant"
  1418.             value: 0
  1419.         }
  1420.     }
  1421. }

  1422. layer {
  1423.     bottom: "fc100010"
  1424.     bottom: "label"
  1425.     name: "loss10"
  1426.     type: "SoftmaxWithLoss"
  1427.     top: "loss10"
  1428. }

  1429. layer {
  1430.     bottom: "fc100010"
  1431.     bottom: "label"
  1432.     top: "acc/top-1"
  1433.     name: "acc/top-1"
  1434.     type: "Accuracy"
  1435.     include {
  1436.         phase: TEST
  1437.     }
  1438. }

  1439. layer {
  1440.     bottom: "fc100010"
  1441.     bottom: "label"
  1442.     top: "acc/top-5"
  1443.     name: "acc/top-5"
  1444.     type: "Accuracy"
  1445.     include {
  1446.         phase: TEST
  1447.     }
  1448.     accuracy_param {
  1449.         top_k: 37
  1450.     }
  1451. }

I have open-sourced the mbus CAPTCHA recognition platform on this forum; to download the source code, visit http://bbs.125.la/forum.php?mod=viewthread&tid=14172517&extra=
I have also deployed the service on my own server and can provide free recognition; for the integration documentation, visit http://139.199.211.96:8090/

Reply posted on 2019-5-29 14:58:35
Hi, with this test code, how can I recognize multiple CAPTCHAs in one go? As soon as I try to recognize two at once it throws an error. Why is that?

Reply posted on 2019-3-23 20:40:55
Awesome... though I can't really understand it.

Reply posted on 2019-1-30 17:16:10
I added TensorBoard support on top of the OP's code, but training throughput dropped by half, so training is expected to take about twice as long (probably because, to collect the data, the accuracy op has to be evaluated after every training step). The upside is that the whole training process can now be visualized. Also, OP, may I start a new thread to share the modified source code? QwQ
TensorBoard visualization screenshots attached.

Addendum (2019-1-30 20:33):
Correction: training takes about 1.5x as long, not 2x; the 2x figure was because I had foolishly duplicated a block of code.
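
For anyone who wants to try the same thing, here is a minimal, self-contained sketch of the tf.summary / FileWriter mechanism that TensorBoard support boils down to (TensorFlow 1.x; this is not the poster's actual code: in train_captcha.py you would log cross_entropy and accuracy instead of the toy value used here):

import tensorflow as tf

loss_value = tf.placeholder(tf.float32, [])
tf.summary.scalar('loss', loss_value)          # register a scalar to be logged
merged = tf.summary.merge_all()

with tf.Session() as sess:
    writer = tf.summary.FileWriter('./logs', sess.graph)
    for step in range(100):
        summary = sess.run(merged, feed_dict={loss_value: 1.0 / (step + 1)})
        writer.add_summary(summary, step)      # one data point per step
    writer.close()

# afterwards: tensorboard --logdir ./logs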

Reply posted on 2018-6-6 17:41:25
I can't figure out the setup...

Reply posted on 2018-6-1 09:15:16 (from a mobile user)
Already hooked it up; the speed is decent.
