Books in the Machine Learning: Foundations series

  • by Fengxiang He
    1 969,-

    Deep learning has significantly reshaped a variety of technologies, such as image processing, natural language processing, and audio processing. The excellent generalizability of deep learning hangs like a "cloud" over conventional complexity-based learning theory: the over-parameterization of deep learning makes almost all existing tools vacuous. This disconnect considerably undermines confidence in deploying deep learning in security-critical areas, including autonomous vehicles and medical diagnosis, where small algorithmic mistakes can lead to fatal disasters. This book seeks to explain this excellent generalizability, covering generalization analysis via size-independent complexity measures, the role of optimization in understanding generalizability, and the relationship between generalizability and ethical/security issues. Efforts to understand the excellent generalizability follow two major paths: (1) developing size-independent complexity measures, which evaluate the "effective" hypothesis complexity that can be learned, instead of the whole hypothesis space; and (2) modelling the hypothesis learned by stochastic gradient methods, the dominant optimizers in deep learning, via stochastic differential equations and the geometry of the associated loss functions. Related works discover that over-parameterization surprisingly brings many good properties to the loss functions. Rising concerns about deep learning centre on ethical and security issues, including privacy preservation and adversarial robustness. Related works also reveal an interplay between these and generalizability: good generalizability usually implies good privacy preservation, while more robust algorithms may generalize worse. We expect readers to gain a big picture of current knowledge in deep learning theory, understand how deep learning theory can guide new algorithm design, and identify future research directions. Readers are assumed to have knowledge of calculus, linear algebra, probability, statistics, and statistical learning theory.
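
    For orientation, the kind of classic complexity-based bound that over-parameterization renders vacuous looks as follows (a standard result of statistical learning theory, not a formula taken from this book): for a loss bounded in [0, 1] and an i.i.d. sample of size n, with probability at least 1 - \delta,

    \[
      \sup_{f \in \mathcal{F}} \Bigl( L(f) - \hat{L}_n(f) \Bigr)
        \;\le\; 2\,\mathfrak{R}_n(\mathcal{F}) + \sqrt{\frac{\ln(1/\delta)}{2n}},
    \]

    where \(\mathfrak{R}_n(\mathcal{F})\) is the Rademacher complexity of the hypothesis class \(\mathcal{F}\), \(L\) the population risk, and \(\hat{L}_n\) the empirical risk. The size-independent measures mentioned above aim to replace this class-wide quantity, which can grow with the parameter count, with the complexity of the "effective" hypotheses actually reachable by training.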

  • by Liang Feng
    2 129,-

    A remarkable facet of the human brain is its ability to manage multiple tasks with apparent simultaneity. Knowledge learned from one task can then be used to enhance problem-solving in other related tasks. In machine learning, the idea of leveraging relevant information across related tasks as inductive biases to enhance learning performance has attracted significant interest. In contrast, attempts to emulate the human brain's ability to generalize in optimization, particularly in population-based evolutionary algorithms, have received little attention to date. Recently, a novel evolutionary search paradigm, Evolutionary Multi-Task (EMT) optimization, has been proposed in the realm of evolutionary computation. In contrast to traditional evolutionary searches, which solve a single task in a single run, an evolutionary multi-tasking algorithm conducts searches concurrently on multiple search spaces corresponding to different tasks or optimization problems, each possessing a unique function landscape. By exploiting the latent synergies among distinct problems, the superior search performance of EMT optimization in terms of solution quality and convergence speed has been demonstrated on a variety of continuous, discrete, and hybrid (mixed continuous and discrete) tasks. This book discusses the foundations and methodologies of developing evolutionary multi-tasking algorithms for complex optimization, including in domains characterized by multiple objectives of interest, high-dimensional search spaces, and NP-hardness.
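
    To make the paradigm concrete, here is a minimal multifactorial-style EMT loop in Python. It is our sketch, not code from the book: the two task functions, the unified search space [0, 1]^D, and all parameter values are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    D, POP, GENS, RMP = 10, 40, 100, 0.3  # dims, population, generations, random-mating prob.

    # Two toy tasks sharing the unified space [0, 1]^D -- hypothetical stand-ins
    # with related optima, so that cross-task transfer can actually help.
    def task0(x):
        return np.sum((x - 0.3) ** 2)  # sphere centred at 0.3

    def task1(x):
        return np.sum((x - 0.4) ** 2)  # sphere centred at 0.4

    tasks = [task0, task1]

    pop = rng.random((POP, D))
    skill = rng.integers(0, 2, POP)  # the task each individual specialises in
    fit = np.array([tasks[s](x) for s, x in zip(skill, pop)])

    for g in range(GENS):
        kids, kskill = [], []
        for _ in range(POP // 2):
            i, j = rng.integers(0, POP, 2)
            if skill[i] == skill[j] or rng.random() < RMP:
                # Assortative mating: crossover across different tasks happens
                # with probability RMP and is the inter-task transfer channel.
                a = rng.random(D)
                c1 = a * pop[i] + (1 - a) * pop[j]
                c2 = a * pop[j] + (1 - a) * pop[i]
            else:
                # Otherwise each parent yields a mutated copy of itself.
                c1 = np.clip(pop[i] + 0.05 * rng.standard_normal(D), 0, 1)
                c2 = np.clip(pop[j] + 0.05 * rng.standard_normal(D), 0, 1)
            for c in (c1, c2):
                kids.append(c)
                kskill.append(skill[i] if rng.random() < 0.5 else skill[j])
        allp = np.vstack([pop, np.array(kids)])
        alls = np.concatenate([skill, np.array(kskill)])
        allf = np.array([tasks[s](x) for s, x in zip(alls, allp)])
        # Rank individuals within their own task before selection, so the two
        # objectives stay comparable even when their scales differ.
        rank = np.empty(allf.size)
        for t in (0, 1):
            m = alls == t
            rank[m] = np.argsort(np.argsort(allf[m]))
        keep = np.argsort(rank, kind="stable")[:POP]
        pop, skill, fit = allp[keep], alls[keep], allf[keep]

    for t in (0, 1):
        print(f"task {t}: best objective {fit[skill == t].min():.5f}")

    The random-mating probability RMP is the single knob controlling transfer here: at 0 the loop degenerates into two independent evolutionary searches, while larger values trade exploitation of each landscape for cross-task exchange.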

  • by Yiqiang Chen
    829,-

    Transfer learning is one of the most important technologies in the era of artificial intelligence and deep learning. It seeks to leverage existing knowledge by transferring it to a new domain. Over the years, a number of related topics have attracted the interest of the research and application community: transfer learning, pre-training and fine-tuning, domain adaptation, domain generalization, and meta-learning. This book offers a comprehensive tutorial-style overview of transfer learning, introducing new researchers in this area to both classic and more recent algorithms. Most importantly, it takes a "student's" perspective when introducing all the concepts, theories, algorithms, and applications, allowing readers to enter the area quickly and easily. The book is accompanied by detailed code implementations that illustrate the core ideas of several important algorithms and provide good examples for practice.
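
    As a concrete illustration of the pre-training and fine-tuning recipe the description mentions, here is a short sketch; it is ours, not the book's accompanying code, and it assumes PyTorch with torchvision >= 0.13 and an invented 5-class target task.

    import torch
    import torch.nn as nn
    from torchvision import models

    num_classes = 5  # hypothetical target task with five labels

    # Start from an ImageNet-pretrained backbone: the "existing knowledge".
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

    # Freeze the pretrained feature extractor; only the new head will train.
    for p in model.parameters():
        p.requires_grad = False

    # Replace the classification head for the new domain (trainable by default).
    model.fc = nn.Linear(model.fc.in_features, num_classes)

    optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
    criterion = nn.CrossEntropyLoss()

    # One illustrative update on a random batch standing in for real target data.
    x = torch.randn(8, 3, 224, 224)
    y = torch.randint(0, num_classes, (8,))
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()

    When the target domain differs more strongly from the source, the usual next step is to unfreeze part or all of the backbone and fine-tune it with a smaller learning rate.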

  • by Yi Mei, Mengjie Zhang, Fangfang Zhang, et al.
    1 869,-

  • by Alexander Jung
    795,-
