智汇书屋 - Psychological Analysis of Drawings: Uncovering the Secrets Behind Pictures
Book data last updated: 2025-01-09 19:32:35


Book Details

  • ISBN: 9787561562475
  • Author: not listed
  • Publisher: not listed
  • Publication date: 2016-11
  • Pages: 272
  • Price: 39
  • Paper: not listed
  • Binding: not listed
  • Format: not listed
  • Language: unknown
  • Series: none
  • Tags: none
  • Douban rating: none yet

Synopsis:

This book is a monograph on the theory and practice of drawing analysis. It introduces the theoretical foundations of art therapy and, taking the house, the tree, the person, and the family as its threads, walks through the process of psychological drawing analysis, discussing many vivid case studies alongside the theory.


Table of Contents:

Chapter 1  Our Secrets

Chapter 2  Ways of Approaching Inner Secrets

Chapter 3  Seeking the Mystery of the Self in Pictures

Chapter 4  Reading the "Self-Portrait": The Face Mirrors the Mind

Chapter 5  Reading the "House": Where the Heart Belongs

Chapter 6  Reading the "Tree": Symbols of Life

Chapter 7  Reading the "Family": The Code of Emotions

Chapter 8  Analysis of Theme Drawings

References

Afterword: A Few Words for Fellow Explorers


About the Author:

No author information available yet; we are still searching.


Publisher Information:

No publisher information available yet; we are still searching!


Excerpts:

No excerpts available yet; we are still searching!



Selected Passages:

No selected passages available yet; we are still searching!


Other Content:



Selected Short Reviews:

  • Reviewer: 别林  Posted: 2012-05-06 14:22:45

    Learning to manage money is very important for a woman.

  • Reviewer: 冰水123  Posted: 2013-02-28 22:13:35

    切斯 — what a strange name.

    Much of the time you are simply powerless. It's like watching a drunk man driving a car: in the blink of an eye he crashes into the roadside, and in that instant you feel there is nothing you can do.

  • Reviewer: 冲鸭  Posted: 2019-05-24 13:26:48

    Invoicing is an economic practice that safeguards state revenue. The tax rate for transportation and construction is 3%, while entertainment can reach 20%. Besides the three main taxes, minor taxes such as stamp duty, property tax, and vehicle-and-vessel tax also matter. Replacing business tax with VAT refines the turnover tax system.

  • Reviewer: 希夷读书地  Posted: 2018-12-09 18:42:54

    Essentially just case studies in drawing psychology.

  • Reviewer: 维多  Posted: 2019-11-06 15:03:28

    11/6.

  • Reviewer: 油画邵爷先生  Posted: 2021-02-01 20:07:36

    To be honest, it is all the author's own interpretation; it cannot be applied in the professional field of painting.


In-Depth Reviews:

  • This book has many flaws, really!!

    Reviewer: 科里昂 刊  Posted: 2010-02-25 23:37:05

  • English chapter summaries, reposted from GitHub

    Reviewer:   Posted: 2018-06-20 09:35:11

    Because the Chinese translation is unbearable to read, here is a set of notes reposted from GitHub; the original is here.

    Note: since some entries in the original are not very coherent and some excerpts are not very meaningful, the notes are slightly abridged.

    Prologue

    Chapter 1: The Machine Learning Revolution

    Chapter 2: The Master Algorithm

    Chapter 3: Hume's Problem of Induction

    Chapter 4: How Does Your Brain Learn?

    Chapter 5: Evolution: Nature's Learning Algorithm

    Chapter 6: In the Church of Reverend Bayes

    Chapter 7: You Are What You Resemble

    Chapter 8: Learning Without a Teacher

    Chapter 9: The Pieces Of The Puzzle Fall Into Place

    Chapter 10: This Is The World Of Machine Learning

    Prologue

    But learning algorithms are artifacts that design other artifacts.

    Symbolists view learning as the inverse of deduction and take ideas from philosophy, psychology, and logic. Connectionists reverse engineer the brain and are inspired by neuroscience and physics. Evolutionaries simulate evolution on the computer and draw on genetics and evolutionary biology. Bayesians believe learning is a form of probabilistic inference and have their roots in statistics. Analogizers learn by extrapolating from similarity judgments and are influenced by psychology and mathematical optimization.

    On the contrary, what it requires is stepping back from the mathematical arcana to see the overarching pattern of learning phenomena; and for this the layman, approaching the forest from a distance, is in some ways better placed than the specialist, already deeply immersed in the study of particular trees.

    Chapter 1: The Machine Learning Revolution

    Scientists make theories, and engineers make devices. Computer scientists make algorithms, which are both theories and devices.

    Learning algorithms are the seeds, data is the soil, and the learned programs are the grown plants.

    The Industrial Revolution automated manual work and the Information Revolution did the same for mental work, but machine learning automates automation itself. Without it, programmers become the bottleneck holding up progress.

    In retrospect, we can see that the progression from computers to the Internet to machine learning was inevitable: computers enable the Internet, which creates a flood of data and the problem of limitless choice; and machine learning uses the flood of data to help solve the limitless choice problem.

    Chapter 2: The Master Algorithm

    All knowledge—past, present, and future—can be derived from data by a single, universal learning algorithm, which is called the Master Algorithm.

    Thus it seems that evolution kept the cerebellum around not because it does something the cortex can't, but just because it's more efficient.

    If something exists but the brain can't learn it, we don't know it exists. We may just not see it or think it's random.

    But if everything we experience is the product of a few simple laws, then it makes sense that a single algorithm can induce all that can be induced.

    Biology, in turn, is the result of optimization by evolution within the constraints of physics and chemistry,

    Humans are good at solving NP problems approximately, and conversely, problems that we find interesting (like Tetris) often have an "NP-ness" about them.

    To use a technology, we don't need to master its inner workings, but we do need to have a good conceptual model of it.

    The analogizers' master algorithm is the support vector machine, which figures out which experiences to remember and how to combine them to make new predictions.

    Chapter 3: Hume's Problem of Induction

    The rationalist likes to plan everything in advance before making the first move. The empiricist prefers to try things and see how they turn out.

    You could be super-Casanova and have dated millions of women thousands of times each, but your master database still wouldn't answer the question of what this woman is going to say this time. How about we just assume that the future will be like the past? This is certainly a risky assumption. (It didn't work for the inductivist turkey.) On the other hand, without it all knowledge is impossible, and so is life.

    This result, known as the "no free lunch" theorem, sets a limit on how good a learner can be. The limit is pretty low: no learner can be better than random guessing!

    Now, suppose you have the "no free lunch" theorem. Pick your favorite learner. For every world where it does better than random guessing, I will deviously construct one where it does worse by the same amount. All I have to do is flip the labels of all unseen instances.

    Tom Mitchell, a leading symbolist, calls it "the futility of bias-free learning." In ordinary life, bias is a pejorative word: preconceived notions are bad. But in machine learning, preconceived notions are indispensable; you can't learn without them. In fact, preconceived notions are also indispensable to human cognition, but they're hardwired into the brain, and we take them for granted. It's biases over and beyond those that are questionable.

    Learning is forgetting the details as much as it is remembering the important parts. Learning is a race between the amount of data you have and the number of hypotheses you consider. More data exponentially reduces the number of hypotheses that survive, but if you start with a lot of them, you may still have some bad ones left at the end.

    Accuracy on held-out data (what we usually call the validation set) is the gold standard in machine learning.

    For example, we can subtract a penalty proportional to the length of the rule from its accuracy and use that as an evaluation measure.
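    As a minimal sketch of such a measure (my own illustration; the names accuracy and penalty_per_condition are not from the book):

    ```python
    # Hypothetical evaluation measure: accuracy minus a penalty
    # proportional to the rule's length (number of conditions).
    def penalized_score(accuracy: float, rule_length: int,
                        penalty_per_condition: float = 0.01) -> float:
        return accuracy - penalty_per_condition * rule_length

    # A 90%-accurate rule with 3 conditions now beats a 92%-accurate rule with 8:
    print(penalized_score(0.90, 3))  # 0.87
    print(penalized_score(0.92, 8))  # 0.84
    ```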

    The preference for simpler hypotheses is popularly known as Occam's razor, but in a machine-learning context this is somewhat misleading. "Entities should not be multiplied beyond necessity," as the razor is often paraphrased, just means choosing the simplest theory that fits the data.

    You can estimate the bias and variance of a learner by comparing its predictions after learning on random variations of the training set. If it keeps making the same mistakes, the problem is bias, and you need a more flexible learner (or just a different one). If there's no pattern to the mistakes, the problem is variance, and you want to either try a less flexible learner or get more data.
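    Read as a procedure, that suggests something like the sketch below (assuming scikit-learn-style learners with fit/predict; the helper name is mine):

    ```python
    import numpy as np

    def prediction_spread(make_learner, X, y, X_test, n_rounds=20, seed=0):
        """Train the same learner on bootstrap variations of the training set
        and collect its predictions on fixed test points: errors that repeat
        across rounds indicate bias, errors with no pattern indicate variance."""
        rng = np.random.default_rng(seed)
        preds = []
        for _ in range(n_rounds):
            idx = rng.integers(0, len(X), size=len(X))  # random variation of the data
            learner = make_learner()
            learner.fit(X[idx], y[idx])
            preds.append(learner.predict(X_test))
        preds = np.asarray(preds)
        return preds.mean(axis=0), preds.var(axis=0)   # per-point mean and spread
    ```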

    For each pair of facts, we construct the rule that allows us to infer the second fact from the first one and generalize it by Newton's principle. When the same general rule is induced over and over again, we can have some confidence that it's true.

    This contrasts with traditional chemotherapy, which affects all cells indiscriminately. Learning which drugs work against which mutations requires a database of patients, their cancers' genomes, the drugs tried, and the outcomes. For these, the symbolist algorithm of choice is decision tree induction. Decision trees instead ensure a priori that each instance will be matched by exactly one rule.

    A single concept implicitly defines two classes: the concept itself and its negation. (For example, spam and nonspam.) Classifiers are the most widespread form of machine learning.

    So to learn a good decision tree, we pick at each node the attribute that on average yields the lowest class entropy across all its branches, weighted by how many examples go into each branch.
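    A minimal version of that criterion (my own implementation, with examples represented as attribute dictionaries):

    ```python
    import math
    from collections import Counter

    def entropy(labels):
        """Class entropy, in bits, of a list of class labels."""
        n = len(labels)
        return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

    def weighted_entropy(examples, labels, attribute):
        """Average class entropy across the branches an attribute creates,
        weighted by how many examples go into each branch."""
        n = len(examples)
        branches = {}
        for x, y in zip(examples, labels):
            branches.setdefault(x[attribute], []).append(y)
        return sum(len(b) / n * entropy(b) for b in branches.values())

    def best_attribute(examples, labels, attributes):
        """Pick the attribute that yields the lowest weighted class entropy."""
        return min(attributes, key=lambda a: weighted_entropy(examples, labels, a))
    ```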

    The psychologist David Marr argued that every information processing system should be studied at three distinct levels: the fundamental properties of the problem it's solving; the algorithms and representations used to solve it; and how they are physically implemented.

    Sets of rules and decision trees are easy to understand, so we know what the learner is up to. This makes it easier to figure out what it's doing right and wrong, fix the latter, and have confidence in the results.

    Connectionists, in particular, are highly critical of symbolist learning. According to them, concepts you can define with logical rules are only the tip of the iceberg; there's a lot going on under the surface that formal reasoning just can't see, in the same way that most of what goes on in our minds is subconscious.

    Chapter 4: How Does Your Brain Learn?

    Brains can perform a large number of computations in parallel, with billions of neurons working at the same time; but each of those computations is slow, because neurons can fire at best a thousand times per second. Some neurons have short axons and some have exceedingly long ones, reaching clear from one side of the brain to the other. Placed end to end, the axons in your brain would stretch from Earth to the moon.

    If one of the memories is the pattern of black-and-white pixels formed by the digit nine and the network sees a distorted nine, it will converge to the "ideal" one and thereby recognize it.

    Rather than a logic gate, a neuron is more like a voltage-to-frequency converter. (The original note reproduced the book's plot of firing frequency as a function of voltage here; the image did not survive the repost.)

    Hyperspace is a double-edged sword. On the one hand, the higher dimensional the space, the more room it has for highly convoluted surfaces and local optima. On the other hand, to be stuck in a local optimum you have to be stuck in every dimension, so it's more difficult to get stuck in many dimensions than it is in three.

    Neural networks are not compositional, and compositionality is a big part of human cognition. Another big issue is that humans—and symbolic models like sets of rules and decision trees—can explain their reasoning, while neural networks are big piles of numbers that no one can understand.

    Chapter 5: Evolution: Nature's Learning Algorithm

    The key input to a genetic algorithm, as Holland's creation came to be known, is a fitness function (a particular type of objective function used to summarize how close a given design solution is to achieving the set aims; in genetic algorithms it guides the simulation toward optimal design solutions). Given a candidate program and some purpose it is meant to fill, the fitness function assigns the program a numeric score reflecting how well it fits the purpose.

    These rule systems, which Holland called classifier systems, are one of the workhorses of the evolutionaries, the machine-learning tribe that simulates evolution. Like multilayer perceptrons, classifier systems face the credit-assignment problem (what is the fitness of rules for intermediate concepts?), and Holland devised the so-called bucket brigade algorithm to solve it.

    In 1972, Niles Eldredge and Stephen Jay Gould proposed that evolution consists of a series of "punctuated equilibria," alternating long periods of stasis with short bursts of rapid change.

    Once the algorithm reaches a local maximum of fitness—a peak in the fitness landscape—it will stay there for a long time until a lucky mutation or crossover lands an individual on the slope to a higher peak, at which point that individual will multiply and climb up the slope with each passing generation. And the higher the current peak, the longer before that happens.

    Genetic algorithms are full of random choices: which hypotheses to keep alive and cross over (with fitter hypotheses being more likely candidates), where to cross two strings, which bits to mutate.
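    A toy genetic algorithm over bit strings that makes those random choices explicit (my own sketch; the fitness function, the classic "count the 1s", is just an illustrative stand-in):

    ```python
    import random

    def genetic_algorithm(fitness, length=20, pop_size=50,
                          generations=100, mutation_rate=0.01, seed=0):
        random.seed(seed)
        pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
        for _ in range(generations):
            # Fitter hypotheses are more likely to be kept alive and crossed over.
            weights = [fitness(ind) + 1e-9 for ind in pop]
            new_pop = []
            for _ in range(pop_size):
                mom, dad = random.choices(pop, weights=weights, k=2)
                point = random.randrange(1, length)       # where to cross two strings
                child = mom[:point] + dad[point:]
                for i in range(length):                   # which bits to mutate
                    if random.random() < mutation_rate:
                        child[i] ^= 1
                new_pop.append(child)
            pop = new_pop
        return max(pop, key=fitness)

    best = genetic_algorithm(fitness=sum)  # "OneMax": fitness = number of 1 bits
    print(sum(best), best)
    ```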

    Genetic algorithms make no a priori assumptions about the structures they will learn, other than their general form.

    Holland showed that the fitter a schema's representatives in one generation are compared to the average, the more of them we can expect to see in the next generation. So, while the genetic algorithm explicitly manipulates strings, it implicitly searches the much larger space of schemas.

    A genetic algorithm is like the leader of a group of gamblers, playing slot machines in every casino in town at the same time. Two schemas compete with each other if they include the same bits and differ in at least one of them, like *10 and *11, and n competing schemas are like n slot machines. Every set of competing schemas is a casino, and the genetic algorithm simultaneously figures out the winning machine in every casino, following the optimal strategy of playing the better-seeming machines with exponentially increasing frequency.

    One consequence of crossing over program trees instead of bit strings is that the resulting programs can have any size, making the learning more flexible. The overall tendency is for bloat, however, with larger and larger trees growing as evolution goes on longer (also known as "survival of the fattest").

    Genetic programming's first success, in 1995, was in designing electronic circuits. Starting with a pile of electronic components such as transistors, resistors, and capacitors, Koza's system reinvented a previously patented design for a low-pass filter, a circuit that can be used for things like enhancing the bass on a dance-music track.

    None of Holland's theoretical results show that crossover actually helps; mutation suffices to exponentially increase the frequency of the fittest schemas in the population over time.

    Engineers certainly use building blocks extensively, but combining them involves, well, a lot of engineering; it's not just a matter of throwing them together any old way, and it's not clear crossover can do the trick.

    "It takes all the running you can do, to keep in the same place." In this view, organisms are in a perpetual (无休止的) arms race with parasites (寄生虫), and sex helps keep the population varied, so that no single germ can infect all of it.

    Christos Papadimitriou and colleagues have shown that sex optimizes not fitness but what they call mixability: a gene's ability to do well on average when combined with other genes. This can be useful when the fitness function is either not known or not constant, as in natural selection, but in machine learning and optimization, hill climbing tends to do better.

    With or without crossover, evolving structure is an essential part of the Master Algorithm. The brain can learn anything, but it can't evolve a brain. The Master Algorithm is neither genetic programming nor backprop, but it has to include the key elements of both: structure learning and weight learning.

    In Baldwinian evolution, behaviors that are first learned later become genetically hardwired. If dog-like mammals can learn to swim, they have a better chance to evolve into seals—as they did—than if they drown.

    Chapter 6: In the Church of Reverend Bayes

    For Bayesians, learning is "just" another application of Bayes' theorem, with whole models as the hypotheses and the data as the evidence: as you see more data, some models become more likely and some less, until ideally one model stands out as the clear winner.

    Laplace derived his so-called rule of succession, which estimates the probability that the sun will rise again after having risen n times as (n+1)/(n+2). When n=0, this is just 1/2; and as n increases, so does the probability, approaching 1 when n→∞.
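    The rule follows from Bayes' theorem with a uniform prior on the sun's unknown daily rise probability θ; a standard derivation (not in the original note):

    $$ P(\text{rise}_{n+1} \mid n \text{ rises}) = \frac{\int_0^1 \theta \cdot \theta^n \, d\theta}{\int_0^1 \theta^n \, d\theta} = \frac{1/(n+2)}{1/(n+1)} = \frac{n+1}{n+2}. $$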

    Humans are not very good at Bayesian inference, at least when verbal reasoning is involved. The problem is that we tend to neglect the cause's prior probability.

    I put "just" in quotes because implementing Bayes' theorem on a computer turns out to be fiendishly hard for all but the simplest problems, for reasons that we're about to see.

    Rather than estimating the probability of each combination of symptoms and flu/not flu separately, a learner that uses Bayes' theorem and assumes the effects are independent given the cause is called a Naïve Bayes classifier.

    It might not seem so at first, but Naïve Bayes is closely related to the perceptron algorithm. The perceptron adds weights and Naïve Bayes multiplies probabilities, but if you take a logarithm, the latter reduces to the former. Both can be seen as generalizations of simple "If ..., then ..." rules.
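    Spelling out the logarithm step (standard algebra, not a quote from the book): Naïve Bayes scores a class c by P(c) times the product of the P(x_i | c), and taking logs turns that product into a perceptron-style weighted sum,

    $$ \log\Big(P(c)\prod_i P(x_i \mid c)\Big) = \log P(c) + \sum_i \log P(x_i \mid c), $$

    where log P(c) plays the role of the bias weight and each log P(x_i | c) the weight of an input.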

    If the states and observations are continuous variables instead of discrete ones, the HMM becomes what's known as a Kalman filter.

    A more insidious problem is that with confidence-rated rules we're prone to double-counting evidence.

    Everything is connected, but only indirectly. In order to affect me, something that happens a mile away must first affect something in my neighborhood, even if only through the propagation of light. As one wag put it, space is the reason everything doesn't happen to you. Put another way, the structure of space is an instance of conditional independence.

    Naïve Bayes, Markov chains, and HMMs are all special cases of Bayesian networks.

    The structure of Naïve Bayes is a single class variable with an arrow to each of the feature variables. (The original note showed the network diagram here; it did not survive the repost.)

    The trick in MCMC (Markov chain Monte Carlo) is to design a Markov chain that converges to the distribution of our Bayesian network. One easy option is to repeatedly cycle through the variables, sampling each one according to its conditional probability given the state of its neighbors. People often talk about MCMC as a kind of simulation, but it's not: the Markov chain does not simulate any real process; rather, we concocted it to efficiently generate samples from a Bayesian network, which is itself not a sequential model.
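    A minimal Gibbs-sampling sketch of that "easy option", on a made-up two-variable binary network (the conditional probabilities below are purely illustrative):

    ```python
    import random

    # Hypothetical conditionals: P(A=1 | B=b) and P(B=1 | A=a).
    P_A_GIVEN_B = {0: 0.3, 1: 0.8}
    P_B_GIVEN_A = {0: 0.4, 1: 0.7}

    def gibbs(n_samples=10000, burn_in=1000, seed=0):
        """Repeatedly cycle through the variables, resampling each one
        given the current state of its neighbor."""
        random.seed(seed)
        a = b = 0
        samples = []
        for t in range(n_samples + burn_in):
            a = 1 if random.random() < P_A_GIVEN_B[b] else 0
            b = 1 if random.random() < P_B_GIVEN_A[a] else 0
            if t >= burn_in:
                samples.append((a, b))
        return samples

    samples = gibbs()
    print(sum(a for a, _ in samples) / len(samples))  # estimate of P(A=1)
    ```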

    This is justified by the so-called maximum likelihood principle: the likelihood of a hypothesis is P(data | hypothesis), and the principle says we should pick the hypothesis that maximizes it. For a Bayesian, in fact, there is no such thing as the truth; you have a prior distribution over hypotheses, after seeing the data it becomes the posterior distribution, as given by Bayes' theorem, and that's all.

    Bayesians can do something much more interesting. They can use the prior distribution to encode experts' knowledge about the problem—their answer to Hume's question. For example, we can design an initial Bayesian network for medical diagnosis by interviewing doctors, asking them which symptoms they think depend on which diseases, and adding the corresponding arrows. This is the "prior network," and the prior distribution can penalize alternative networks by the number of arrows that they add or remove from it.

    We can put a prior distribution on any class of hypotheses—sets of rules, neural networks, programs—and then update it with the hypotheses' likelihood given the data. The simplified graph structure makes the models learnable and is worth keeping, but then we're better off just learning the best parameters we can for the task at hand, irrespective of whether they're probabilities.

    In Markov networks we can also learn features using hill climbing, similar to rule induction. Either way, gradient descent is a good way to learn the weights. Markov networks can be trained to maximize either the likelihood of the whole data or the conditional likelihood of what we want to predict given what we know. For Siri, the likelihood of the whole data is P(words, sounds), and the conditional likelihood we're interested in is P(words | sounds). By optimizing the latter, we can ignore P(sounds), which is only a distraction from our goal. And since we ignore it, it can be arbitrarily complex. This is much better than HMMs' unrealistic assumption that sounds depend solely on the corresponding words, without any influence from the surroundings.

    Bayesian learning works on a single table of data, where each column represents a variable (for example, the expression level of one gene) and each row represents an instance (for example, a single microarray experiment, with each gene's observed expression level). It's OK if the table has "holes" and measurement errors, because we can use probabilistic inference to fill in the holes and average over the errors.

    But if we have more than one table, Bayesian learning is stuck. It doesn't know how to, for example, combine gene expression data with data about which DNA segments get translated into proteins, and how in turn the three-dimensional shapes of those proteins cause them to lock on to different parts of the DNA molecule, affecting the expression of other genes. In logic, we can easily write rules relating all of these aspects, and learn them from the relevant combinations of tables—but only provided the tables have no holes or errors.

    All of the tribes we've met so far have one thing in common: they learn an explicit model of the phenomenon under consideration, whether it's a set of rules, a multilayer perceptron, a genetic program, or a Bayesian network. When they don't have enough data to do that, they're stumped. But analogizers can learn from as little as one example because they never form a model.

    Chapter 7: You Are What You Resemble

    This chapter covers the nearest-neighbor algorithm, support vector machines, and analogical reasoning.

    Nearest-neighbor is the simplest and fastest learning algorithm ever invented. In fact, you could even say it's the fastest algorithm of any kind that could ever be invented.

    Scientists routinely use linear regression to predict continuous variables, but most phenomena are not linear. Luckily, they're locally linear because smooth curves are locally well approximated by straight lines. So if instead of trying to fit a straight line to all the data, you just fit it to the points near the query point, you now have a very powerful nonlinear regression algorithm.
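    A compact sketch of that idea (my own illustration): fit a weighted least-squares line in which a Gaussian kernel decides how much each point counts, so only points near the query matter.

    ```python
    import numpy as np

    def locally_weighted_prediction(x_query, X, y, bandwidth=1.0):
        """Fit a straight line to the points near x_query and predict there."""
        w = np.exp(-((X - x_query) ** 2) / (2 * bandwidth ** 2))  # nearness weights
        A = np.stack([np.ones_like(X), X], axis=1)                # [1, x] design matrix
        W = np.diag(w)
        # Weighted least squares: solve (A^T W A) beta = A^T W y.
        beta = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)
        return beta[0] + beta[1] * x_query

    X = np.linspace(0, 10, 200)
    y = np.sin(X)                                   # a smooth nonlinear target
    print(locally_weighted_prediction(3.0, X, y))   # close to sin(3.0) ~= 0.141
    ```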

    These days all kinds of algorithms are used to recommend items to users, but weighted k-nearest-neighbor was the first widely used one, and it's still hard to beat.

    Nearest-neighbor was the first algorithm in history that could take advantage of unlimited amounts of data to learn arbitrarily complex concepts.

    Nearest-neighbor is hopelessly confused by irrelevant attributes, because they all contribute to the similarity between examples. With enough irrelevant attributes, accidental similarity in the irrelevant dimensions swamps out meaningful similarity in the important ones, and nearest-neighbor becomes no better than random guessing.

    It gets even worse. Nearest-neighbor is based on finding similar objects, and in high dimensions, the notion of similarity itself breaks down. Consider an orange: a tasty ball of pulp surrounded by a thin shell of skin. Let's say 90 percent of the radius of an orange is occupied by pulp, and the remaining 10 percent by skin. That means 73 percent of the volume of the orange is pulp (0.9^3). Now consider a hyperorange: still with 90 percent of the radius occupied by pulp, but in a hundred dimensions, say. The pulp has shrunk to only about three thousandths of a percent of the hyperorange's volume (0.9^100). The hyperorange is all skin, and you'll never be done peeling it!
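    The arithmetic checks out directly: the fraction of a d-dimensional ball's volume lying within 90 percent of its radius is 0.9 raised to the power d.

    ```python
    for d in (3, 100):
        pulp = 0.9 ** d
        print(f"d={d}: pulp fraction = {pulp:.3e}")
    # d=3:   7.290e-01  (73 percent pulp)
    # d=100: 2.656e-05  (about three thousandths of a percent pulp)
    ```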

    With a high-dimensional normal distribution, you're more likely to get a sample far from the mean than close to it. A bell curve in hyperspace looks more like a doughnut than a bell.

    In fact, no learner is immune to the curse of dimensionality. It's the second worst problem in machine learning, after overfitting.

    To handle weakly relevant attributes, one option is to learn attribute weights. Instead of letting the similarity along all dimensions count equally, we "shrink" the less-relevant ones. Data is not spread uniformly in (hyper) space. The examples may have a thousand attributes, but in reality they all "live" in a much lower-dimensional space.

    The SVM chooses the support vectors and weights that yield the maximum possible margin.

    We have to maximize the margin under the constraint that the weights can only increase up to some fixed value. Or, equivalently, we can minimize the weights under the constraint that all examples have a given margin, which could be one—the precise value is arbitrary. This is what SVMs usually do.
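    In symbols, the standard hard-margin formulation the note is describing (labels y_i in {-1, +1}; not quoted from the book):

    $$ \min_{\mathbf{w},\,b}\ \tfrac{1}{2}\lVert\mathbf{w}\rVert^2 \quad \text{subject to} \quad y_i(\mathbf{w}\cdot\mathbf{x}_i + b) \ge 1 \ \text{for all } i. $$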

    SVMs can be seen as a generalization of the perceptron, because a hyperplane boundary between classes is what you get when you use a particular similarity measure (the dot product between vectors). But SVMs have a major advantage compared to multilayer perceptrons: the weights have a single optimum instead of many local ones, so learning them reliably is much easier.

    Provided you can learn them, networks with many layers can express many functions more compactly than SVMs, which always have just one layer, and this can make all the difference.

    It turns out that we can view what SVMs do with kernels, support vectors, and weights as mapping the data to a higher-dimensional space and finding a maximum-margin hyperplane in that space. For some kernels, the derived space has infinite dimensions, but SVMs are completely unfazed by that.

    Structure mapping takes two descriptions, finds a coherent correspondence between some of their parts and relations, and then, based on that correspondence, transfers further properties from one structure to the other.

    The problem is that all the learners we've seen so far need a teacher to tell them the right answer. They can't learn to distinguish tumor cells from healthy ones unless someone labels them "tumor" or "healthy." But humans can learn without a teacher; they do it from the day they're born.

    Chapter 8: Learning Without a Teacher

    Above all, even though children certainly get plenty of help from their parents, they learn mostly on their own, without supervision, and that's what seems most miraculous.

    Whenever we want to learn a statistical model but are missing some crucial information (e.g., the classes of the examples), we can use EM (expectation maximization).

    You might have noticed a certain resemblance between k-means and EM, in that they both alternate between assigning entities to clusters and updating the clusters' descriptions. This is not an accident: k-means itself is a special case of EM, which you get when all the attributes have "narrow" normal distributions, that is, normal distributions with very small variance.
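    The alternation the note describes, as a minimal k-means sketch (my own; it assumes no cluster goes empty): assign points to the nearest center, then move each center to the mean of its points.

    ```python
    import numpy as np

    def kmeans(X, k=3, n_iters=100, seed=0):
        rng = np.random.default_rng(seed)
        centers = X[rng.choice(len(X), size=k, replace=False)]
        for _ in range(n_iters):
            # "E-like" step: assign each point to its nearest cluster center.
            dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
            labels = dists.argmin(axis=1)
            # "M-like" step: update each center to the mean of its points.
            centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        return centers, labels
    ```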

    One of the most popular algorithms for nonlinear dimensionality reduction, called Isomap, connects each data point in a high-dimensional space to all nearby points, computes the shortest distances between all pairs of points along the resulting network, and finds the reduced coordinates that best approximate these distances.

    In effect, reinforcement learning is a kind of speeded-up evolution—trying, discarding, and refining actions within a single lifetime instead of over generations—and by that standard it's extremely efficient.

    Chris Watkins sees many things children can do that reinforcement learners can't: solve problems, solve them better after a few attempts, make plans, acquire increasingly abstract knowledge. Luckily, we also have learning algorithms for these higher-level abilities, the most important of which is chunking. Crucially, grouping things into chunks allows us to process much more information than we otherwise could. A chunk in this sense has two parts: the stimulus (a pattern you recognize in the external world or in your short-term memory) and the response (the sequence of actions you execute as a result).

    In nonrelational learning, the parameters of a model are tied in only one way: across all the independent examples (e.g., all the patients we've diagnosed). In relational learning, every feature template we create ties the parameters of all its instances.

    Chapter 9: The Pieces Of The Puzzle Fall Into Place

    Although it is less well known, many of the most important technologies in the world are the result of inventing a unifier, a single mechanism that does what previously required many. As it turns out, it's not hard to combine many different learners into one, using what is known as metalearning. Netflix, Watson, Kinect, and countless others use it, and it's one of the most powerful arrows in the machine learner's quiver. It's also a stepping-stone to the deeper unification that will follow.

    Bagging generates random variations of the training set by resampling, applies the same learner to each one, and combines the results by voting.
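    A minimal bagging sketch (assuming scikit-learn-style classifiers and binary 0/1 labels; the helper names are mine):

    ```python
    import numpy as np

    def bagging_fit(make_learner, X, y, n_models=25, seed=0):
        """Train the same learner on bootstrap resamples of the training set."""
        rng = np.random.default_rng(seed)
        models = []
        for _ in range(n_models):
            idx = rng.integers(0, len(X), size=len(X))  # resample with replacement
            model = make_learner()
            model.fit(X[idx], y[idx])
            models.append(model)
        return models

    def bagging_predict(models, X):
        """Combine the models' predictions by majority vote."""
        votes = np.array([m.predict(X) for m in models])
        return (votes.mean(axis=0) > 0.5).astype(int)
    ```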

    One of the cleverest metalearners is boosting, created by two learning theorists, Yoav Freund and Rob Schapire. Instead of combining different learners, boosting repeatedly applies the same classifier to the data, using each new model to correct the previous ones' mistakes. It does this by assigning weights to the training examples; the weight of each misclassified example is increased after each round of learning, causing later rounds to focus more on it.
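    The weight-update loop at the heart of boosting, in AdaBoost style (my own condensed sketch; labels are assumed to be +1/-1 and the base learner is assumed to accept per-example weights, as scikit-learn classifiers do via sample_weight):

    ```python
    import numpy as np

    def adaboost_fit(make_weak_learner, X, y, n_rounds=50):
        """y must be +1/-1; each round refits on reweighted data."""
        n = len(X)
        w = np.full(n, 1.0 / n)              # start with uniform example weights
        models, alphas = [], []
        for _ in range(n_rounds):
            model = make_weak_learner()
            model.fit(X, y, sample_weight=w)
            pred = model.predict(X)
            err = w[pred != y].sum()
            if err >= 0.5:                   # no better than chance: stop
                break
            alpha = 0.5 * np.log((1 - err) / (err + 1e-12))
            w *= np.exp(-alpha * y * pred)   # raise weights of misclassified examples
            w /= w.sum()
            models.append(model)
            alphas.append(alpha)
        return models, alphas

    def adaboost_predict(models, alphas, X):
        return np.sign(sum(a * m.predict(X) for m, a in zip(models, alphas)))
    ```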

    As you approach it from a distance, you can see that the city is made up of three concentric circles, each bounded by a wall. The outer and by far widest circle is Optimization Town. Each house here is an algorithm.

    Representation is the formal language in which the learner expresses its models.

      • The symbolists' formal language is logic, of which rules and decision trees are special cases.
      • The connectionists' is neural networks.
      • The evolutionaries' is genetic programs, including classifier systems.
      • The Bayesians' is graphical models, an umbrella term for Bayesian networks and Markov networks.
      • The analogizers' is specific instances, possibly with weights, as in an SVM.

    Evaluation is a scoring function that says how good a model is.

      • Symbolists use accuracy or information gain.
      • Connectionists use a continuous error measure, such as squared error, which is the sum of the squares of the differences between the predicted values and the true ones.
      • Bayesians use the posterior probability.
      • Analogizers (at least of the SVM stripe) use the margin.

    In addition to how well the model fits the data, all tribes take into account other desirable properties, such as the model's simplicity.

    Optimization is the algorithm that searches for the highest-scoring model and returns it (this is the learning process itself).

      • The symbolists' characteristic search algorithm is inverse deduction.
      • The connectionists' is gradient descent.
      • The evolutionaries' is genetic search, including crossover and mutation.
      • The Bayesians are unusual in this regard: they don't just look for the best model, but average over all models, weighted by how probable they are. To do the weighting efficiently, they use probabilistic inference algorithms like MCMC.
      • The analogizers (or more precisely, the SVM mavens) use constrained optimization to find the best model.

    Chapter 10: This Is The World Of Machine Learning

    Eventually, we'll start talking about the employment rate instead of the unemployment one and reducing it will be seen as a sign of progress. People will seek meaning in human relationships, self-actualization, and spirituality, much as they do now. The need to earn a living will be a distant memory, another piece of humanity's barbaric past that we rose above.

    Technology is the extended phenotype of man. This means we can continue to control it even if it becomes far more complex than we can understand. People worry that computers will get too smart and take over the world, but the real problem is that they're too stupid and they've already taken over the world.

    The statistician knows that prediction is hard, especially about the future, and the computer scientist knows that the best way to predict the future is to invent it, but the unexamined future is not worth inventing.


Actual Book Ratings

  • Plot: 3 points
  • Characterization: 9 points
  • Thematic depth: 7 points
  • Writing style: 9 points
  • Use of language: 6 points
  • Fluency: 4 points
  • Conveying of ideas: 8 points
  • Depth of knowledge: 5 points
  • Breadth of knowledge: 5 points
  • Practicality: 3 points
  • Chapter organization: 9 points
  • Structural layout: 5 points
  • Novelty and originality: 5 points
  • Emotional resonance: 9 points
  • Engagement: 7 points
  • Real-world relevance: 8 points
  • Immersion: 3 points
  • Factual accuracy: 5 points
  • Cultural contribution: 3 points


Site Ratings

  • Book variety: 4 points
  • Completeness of book information: 5 points
  • Update speed: 5 points
  • Ease of use: 7 points
  • Book clarity: 3 points
  • Format compatibility: 5 points
  • Presence of ads: 3 points
  • Loading speed: 7 points
  • Security: 7 points
  • Stability: 8 points
  • Search function: 8 points
  • Download convenience: 8 points


Download Tags

  • WeChat Reading (511+)
  • txt (96+)
  • Fast (461+)
  • Good value (91+)
  • No extra pages (596+)
  • Complete book (593+)
  • Good quality (420+)
  • Positive reviews (301+)
  • Purchasable (559+)
  • Suitable font (546+)
  • Complete chapters (265+)
  • Excellent layout (498+)
  • Convenient (128+)

Download Reviews

  • User 饶***丽: (2024-12-25 19:25:58)

    The download process is really simple; you just keep clicking.

  • User 曾***玉: (2024-12-11 00:36:20)

    Just pick epub/azw3/mobi and import it into WeChat Reading; the experience is perfect!!!

  • User 訾***雰: (2025-01-08 17:08:00)

    Download speed is fast; I chose the epub format.

  • User 家***丝: (2024-12-17 12:32:22)

    Great, 6666666.

  • User 游***钰: (2024-12-28 12:28:13)

    You only know how good it is once you've used it. Highly recommended!

  • User 隗***杉: (2024-12-21 01:24:18)

    Pretty good, and a good read! Support it! Download it now!

  • User 敖***菡: (2024-12-26 00:21:45)

    A good site, very convenient.

  • User 沈***松: (2025-01-07 17:57:08)

    Pretty good, not bad.

  • User 谭***然: (2024-12-21 22:07:36)

    If only it were free.


随机推荐