Λέσχη Φίλων Στατιστικής - GrStats forum

grstats
Posts : 959
Join date : 2009-10-21
http://stat-athens.aueb.gr/~grstats/

Mathematics Section Seminar, ΣΕΜΦΕ, 29.03 - Daniel Schmidt (Monash University, Australia)

Thu 21 Mar 2024 - 18:48

ANNOUNCEMENT

Talk by Daniel Schmidt (Associate Professor of Computer Science at the Department of Data Science and AI, Monash University, Australia, https://research.monash.edu/en/persons/daniel-schmidt) at the Mathematics Section Seminar of ΣΕΜΦΕ, this coming

Friday, 29 March, at 13:00, in the Seminar Room of the Mathematics Section, ΣΕΜΦΕ.

Title and abstract follow.

------------------------------

Title: Prevalidated ridge regression as a highly-efficient drop-in replacement for logistic regression for high-dimensional data

Abstract: Linear models are widely used in classification and are particularly effective for high-dimensional data, where linear decision boundaries (separating hyperplanes) often separate classes well, even for complex data. A recent example of a technique that effectively utilises linear classifiers is the ROCKET family of classifiers for time series classification. One reason the ROCKET family is so fast is its use of a linear classifier based on standard squared-error ridge regression. Fitting a linear model based on squared error is significantly faster and more stable than fitting a standard regularised multinomial logistic regression based on logarithmic loss (i.e., regularised maximum likelihood), as in the latter case the solutions can only be found via a numerical search. While fast, one drawback of squared-error ridge regression is that it is unable to produce probabilistic predictions. I will demonstrate some very recent work on how to use regular ridge regression to train L2-regularised multinomial logistic regression models for very large numbers of features, including choosing a suitable degree of regularisation, with a time complexity no greater than that of a single ordinary least-squares fit. This is in contrast to logistic regression, which requires a full refit for every value of the regularisation parameter considered, and for every fold used in cross-validation. Our new approach allows models based on linear classifier technology to provide well-calibrated probabilistic predictions with minimal additional computational overhead. If time permits, I will also discuss some thoughts on when such linear classifiers would be expected to perform well.
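The speed claim in the abstract hinges on a standard property of ridge regression: from a single thin SVD of the design matrix, both the fitted values and closed-form leave-one-out ("prevalidated") predictions are available for every candidate value of the regularisation parameter, with no refitting. The Python sketch below illustrates only that generic shortcut on synthetic data; it is not the speaker's published algorithm, the data, names, and lambda grid are illustrative assumptions, and the calibration step that turns prevalidated scores into probabilities is reduced here to a plain softmax.

```python
# Hedged sketch: closed-form leave-one-out ridge predictions for many
# regularisation values from one SVD. Illustrative only; not the speaker's
# published method. Data, names, and the lambda grid are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n, p, K = 200, 1000, 3                 # n samples, p >> n features, K classes
X = rng.standard_normal((n, p))
y = rng.integers(0, K, size=n)
Y = np.eye(K)[y]                       # one-hot targets for squared-error fitting

# One thin SVD of X; every ridge solution below reuses it.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
UtY = U.T @ Y

for lam in 10.0 ** np.arange(-2, 5):
    d = s**2 / (s**2 + lam)            # per-singular-value shrinkage factors
    Y_hat = U @ (d[:, None] * UtY)     # in-sample ridge predictions
    h = (U**2) @ d                     # diagonal of the ridge hat matrix
    # Leave-one-out ("prevalidated") predictions in closed form:
    Y_loo = (Y_hat - h[:, None] * Y) / (1.0 - h[:, None])
    # Score this lambda by the log-loss of a softmax over the LOO scores.
    Z = Y_loo - Y_loo.max(axis=1, keepdims=True)
    P = np.exp(Z)
    P /= P.sum(axis=1, keepdims=True)
    loss = -np.mean(np.log(P[np.arange(n), y] + 1e-12))
    print(f"lambda = {lam:g}   prevalidated log-loss = {loss:.3f}")
```

By contrast, cross-validated logistic regression must rerun an iterative solver for each regularisation value and each fold, which is where the speed gap described in the abstract comes from.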