[Deep Learning] -- An Introduction to GANs
Published: 2019-06-21


I. Overview

GANs (generative adversarial networks) took deep learning by storm around 2016, so they are well worth learning. The most intuitive application of a GAN is generating data, such as images.

II. Details

1. A Real-Life Analogy

For example, suppose r denotes real money.

The counterfeiter is the generator G: given a noise input x, G learns a set of parameters w and produces G(x), transforming the noise into something that follows the distribution of real money. This is generation, i.e. printing counterfeit bills.

The police officer is the discriminator D: both G(x) and the real money r are fed into the discriminator, which has to tell real from fake; real money should be classified as 1 and counterfeit money as 0. This is discrimination.

In the end, the generator wants the discriminator to be unable to tell which samples are real and which are fake. For the generated samples to get better, the discriminator must also get stronger. There is a game-theoretic flavor to this: only if my opponent is strong can I become stronger.

2. Mathematical Formulation

What we ultimately hope for is that the distribution of the generated samples matches the distribution of the real data.
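In the standard GAN formulation (Goodfellow et al., 2014), this hope can be stated precisely: the generator's distribution p_g should converge to the real data distribution p_data, at which point the optimal discriminator can do no better than guessing. As a brief reference, the optimal discriminator for a fixed generator is

D^*(x) = \frac{p_{\text{data}}(x)}{p_{\text{data}}(x) + p_g(x)}, \qquad p_g = p_{\text{data}} \;\Rightarrow\; D^*(x) = \tfrac{1}{2}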

 

3. Loss Function
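The losses used in the code below follow the standard GAN objective, a two-player minimax game between D and G (x is a real sample, z is noise):

\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}}\big[\log D(x)\big] + \mathbb{E}_{z \sim p_z}\big[\log\big(1 - D(G(z))\big)\big]

In practice the code minimizes one loss per network, with the common non-saturating form for the generator:

\text{loss}_D = -\mathbb{E}\big[\log D(x)\big] - \mathbb{E}\big[\log\big(1 - D(G(z))\big)\big], \qquad \text{loss}_G = -\mathbb{E}\big[\log D(G(z))\big]

These correspond directly to loss_d and loss_g in the listing.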

4. Code Example

Workflow:

To give the discriminator a good starting point, we first train a separate network D_pre (in the code below it is trained to fit the density of the real data distribution, so it already "knows" what real samples look like). Pre-training yields a set of weights and biases, which are then used to initialize the real discriminator D.

Code:

import argparse
import numpy as np
from scipy.stats import norm
import tensorflow as tf
import matplotlib.pyplot as plt
from matplotlib import animation
import seaborn as sns

sns.set(color_codes=True)

seed = 42
np.random.seed(seed)
tf.set_random_seed(seed)


class DataDistribution(object):
    """The real data distribution: a 1D Gaussian."""
    def __init__(self):
        self.mu = 4        # mean
        self.sigma = 0.5   # standard deviation

    def sample(self, N):
        samples = np.random.normal(self.mu, self.sigma, N)
        samples.sort()
        return samples


class GeneratorDistribution(object):
    """Noise input for the generator: evenly spaced points plus a little jitter."""
    def __init__(self, range):
        self.range = range

    def sample(self, N):
        return np.linspace(-self.range, self.range, N) + \
            np.random.random(N) * 0.01


def linear(input, output_dim, scope=None, stddev=1.0):
    # fully connected layer; stddev controls the weight initialization
    norm = tf.random_normal_initializer(stddev=stddev)
    const = tf.constant_initializer(0.0)
    with tf.variable_scope(scope or 'linear'):
        w = tf.get_variable('w', [input.get_shape()[1], output_dim], initializer=norm)
        b = tf.get_variable('b', [output_dim], initializer=const)
        return tf.matmul(input, w) + b


def generator(input, h_dim):
    h0 = tf.nn.softplus(linear(input, h_dim, 'g0'))  # hidden layer, shape (batch_size, h_dim)
    h1 = linear(h0, 1, 'g1')
    return h1  # the generated sample


def discriminator(input, h_dim):
    h0 = tf.tanh(linear(input, h_dim * 2, 'd0'))
    h1 = tf.tanh(linear(h0, h_dim * 2, 'd1'))
    h2 = tf.tanh(linear(h1, h_dim * 2, scope='d2'))
    h3 = tf.sigmoid(linear(h2, 1, scope='d3'))  # final output: probability the input is real
    return h3


def optimizer(loss, var_list, initial_learning_rate):
    decay = 0.95
    num_decay_steps = 150  # decay the learning rate by a factor of 0.95 every 150 steps
    batch = tf.Variable(0)
    learning_rate = tf.train.exponential_decay(
        initial_learning_rate,
        batch,
        num_decay_steps,
        decay,
        staircase=True
    )
    optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(
        loss,
        global_step=batch,
        var_list=var_list
    )
    return optimizer


class GAN(object):
    def __init__(self, data, gen, num_steps, batch_size, log_every):
        self.data = data
        self.gen = gen
        self.num_steps = num_steps
        self.batch_size = batch_size
        self.log_every = log_every
        self.mlp_hidden_size = 4   # number of hidden units
        self.learning_rate = 0.03  # learning rate

        self._create_model()

    def _create_model(self):
        # D_pre is pre-trained separately and later used to initialize the real discriminator.
        with tf.variable_scope('D_pre'):
            self.pre_input = tf.placeholder(tf.float32, shape=(self.batch_size, 1))
            self.pre_labels = tf.placeholder(tf.float32, shape=(self.batch_size, 1))
            D_pre = discriminator(self.pre_input, self.mlp_hidden_size)
            self.pre_loss = tf.reduce_mean(tf.square(D_pre - self.pre_labels))
            self.pre_opt = optimizer(self.pre_loss, None, self.learning_rate)

        # This defines the generator network - it takes samples from a noise
        # distribution as input, and passes them through an MLP.
        with tf.variable_scope('Gen'):
            self.z = tf.placeholder(tf.float32, shape=(self.batch_size, 1))  # noise input
            self.G = generator(self.z, self.mlp_hidden_size)                 # generated samples

        # The discriminator tries to tell the difference between samples from the
        # true data distribution (self.x) and the generated samples (self.z).
        #
        # Here we create two copies of the discriminator network (that share parameters),
        # as you cannot use the same network with different inputs in TensorFlow.
        with tf.variable_scope('Disc') as scope:
            self.x = tf.placeholder(tf.float32, shape=(self.batch_size, 1))
            self.D1 = discriminator(self.x, self.mlp_hidden_size)  # D applied to real data
            scope.reuse_variables()                                # D1 and D2 share weights
            self.D2 = discriminator(self.G, self.mlp_hidden_size)  # D applied to generated data

        # Define the loss for discriminator and generator networks (see the original
        # paper for details), and create optimizers for both
        self.loss_d = tf.reduce_mean(-tf.log(self.D1) - tf.log(1 - self.D2))  # discriminator loss
        self.loss_g = tf.reduce_mean(-tf.log(self.D2))  # generator loss: push D(G(z)) towards 1

        self.d_pre_params = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope='D_pre')
        self.d_params = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope='Disc')
        self.g_params = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope='Gen')

        self.opt_d = optimizer(self.loss_d, self.d_params, self.learning_rate)
        self.opt_g = optimizer(self.loss_g, self.g_params, self.learning_rate)

    def train(self):
        with tf.Session() as session:
            tf.global_variables_initializer().run()

            # Pre-train the discriminator so it starts from reasonable weights:
            # D_pre is fitted to the density of the real data distribution.
            num_pretrain_steps = 1000
            for step in range(num_pretrain_steps):
                d = (np.random.random(self.batch_size) - 0.5) * 10.0
                labels = norm.pdf(d, loc=self.data.mu, scale=self.data.sigma)
                pretrain_loss, _ = session.run([self.pre_loss, self.pre_opt], {
                    self.pre_input: np.reshape(d, (self.batch_size, 1)),
                    self.pre_labels: np.reshape(labels, (self.batch_size, 1))
                })
            self.weightsD = session.run(self.d_pre_params)  # grab the pre-trained parameters

            # copy weights from pre-training over to new D network
            for i, v in enumerate(self.d_params):
                session.run(v.assign(self.weightsD[i]))

            # Train the actual GAN.
            for step in range(self.num_steps):
                # update discriminator: it sees both real and generated inputs
                x = self.data.sample(self.batch_size)  # real data
                z = self.gen.sample(self.batch_size)   # noise
                loss_d, _ = session.run([self.loss_d, self.opt_d], {
                    self.x: np.reshape(x, (self.batch_size, 1)),
                    self.z: np.reshape(z, (self.batch_size, 1))
                })

                # update generator with fresh noise
                z = self.gen.sample(self.batch_size)
                loss_g, _ = session.run([self.loss_g, self.opt_g], {
                    self.z: np.reshape(z, (self.batch_size, 1))
                })

                if step % self.log_every == 0:
                    print('{}: {}\t{}'.format(step, loss_d, loss_g))
                if step % 100 == 0 or step == 0 or step == self.num_steps - 1:
                    self._plot_distributions(session)

    def _samples(self, session, num_points=10000, num_bins=100):
        xs = np.linspace(-self.gen.range, self.gen.range, num_points)
        bins = np.linspace(-self.gen.range, self.gen.range, num_bins)

        # histogram of the real data distribution
        d = self.data.sample(num_points)
        pd, _ = np.histogram(d, bins=bins, density=True)

        # histogram of the generated samples
        zs = np.linspace(-self.gen.range, self.gen.range, num_points)
        g = np.zeros((num_points, 1))
        for i in range(num_points // self.batch_size):
            g[self.batch_size * i:self.batch_size * (i + 1)] = session.run(self.G, {
                self.z: np.reshape(
                    zs[self.batch_size * i:self.batch_size * (i + 1)],
                    (self.batch_size, 1)
                )
            })
        pg, _ = np.histogram(g, bins=bins, density=True)
        return pd, pg

    def _plot_distributions(self, session):
        pd, pg = self._samples(session)
        p_x = np.linspace(-self.gen.range, self.gen.range, len(pd))
        f, ax = plt.subplots(1)
        ax.set_ylim(0, 1)
        plt.plot(p_x, pd, label='real data')
        plt.plot(p_x, pg, label='generated data')
        plt.title('1D Generative Adversarial Network')
        plt.xlabel('Data values')
        plt.ylabel('Probability density')
        plt.legend()
        plt.show()


def main(args):
    model = GAN(
        DataDistribution(),
        GeneratorDistribution(range=8),
        args.num_steps,
        args.batch_size,
        args.log_every,
    )
    model.train()


def parse_args():
    parser = argparse.ArgumentParser()
    parser.add_argument('--num-steps', type=int, default=1200,
                        help='the number of training steps to take')
    parser.add_argument('--batch-size', type=int, default=12,
                        help='the batch size')
    parser.add_argument('--log-every', type=int, default=10,
                        help='print loss after this many steps')
    return parser.parse_args()


if __name__ == '__main__':
    main(parse_args())
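The listing uses TensorFlow 1.x APIs (tf.placeholder, tf.Session, tf.set_random_seed, tf.train.GradientDescentOptimizer), so it runs as written only under TensorFlow 1.x. If only TensorFlow 2.x is available, one common workaround (an assumption about your environment, not something covered in the original post) is to swap the import for the v1 compatibility module:

# Assumes TensorFlow 2.x with the v1 compatibility layer; replaces "import tensorflow as tf"
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

Saved as, say, gan_toy.py (the file name is just for illustration), the script is then run with the flags defined in parse_args, for example: python gan_toy.py --num-steps 1200 --batch-size 12 --log-every 10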

 

Results:

By the final iterations, the generated distribution becomes more and more similar to the real data distribution.

 

Reposted from: https://www.cnblogs.com/LHWorldBlog/p/9250419.html
