A list of all the posts and pages found on the site. For you robots out there, an XML version is available for digesting as well.
About me
This is a page not in the main menu
Published:
Here are summaries of some of the talks from NAACL 2021. So far, I have only summarized the following papers; I will be summarizing more, and will either append them here or write a new blog post. Feel free to check back soon.
Published:
ICLR 2021
Published:
In the past couple of years, Transformers have achieved state-of-the-art results in a variety of natural language tasks. To better understand Transformers and what they learn in practice, researchers have done layer-wise analysis of a Transformer’s hidden states to understand what the model learns in each layer. A wave of recent work has started to “probe” state-of-the-art Transformers, inspecting the structure of the network to assess whether there exist localizable regions associated with distinct types of linguistic decisions, both syntactic and semantic. Researchers examine the hidden states between encoder layers directly and feed those hidden states into a linear layer + softmax to predict what kind of information is encoded in each hidden state.
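As an illustration, here is a minimal sketch of such a probing classifier: a linear layer + softmax trained on the frozen hidden states of one encoder layer. The checkpoint, layer index, label count, and the per-token labeling task are all illustrative assumptions, not taken from any specific probing paper.

```python
# Probing sketch: train a linear layer + softmax on frozen BERT hidden states
# to predict a per-token linguistic label (e.g. POS tags). The model name,
# layer index, and number of labels below are illustrative choices.
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased", output_hidden_states=True)
encoder.eval()  # the encoder stays frozen; only the probe is trained

num_labels = 17  # e.g. a POS tag set size (assumption)
probe = nn.Linear(encoder.config.hidden_size, num_labels)
optimizer = torch.optim.Adam(probe.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def probe_step(sentence, token_labels, layer=6):
    """One training step of the probe on one sentence (hypothetical data,
    with token_labels already aligned to the wordpiece tokenization)."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():  # no gradients flow into the encoder
        hidden = encoder(**inputs).hidden_states[layer]  # (1, seq_len, hidden)
    logits = probe(hidden.squeeze(0))  # (seq_len, num_labels)
    loss = loss_fn(logits, token_labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

If the probe reaches high accuracy from a given layer's states, that layer is taken as encoding the corresponding linguistic information.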
Published:
This blog post is a continuation of my previous blog post, Transformers. In my previous post, I explained the original Transformer paper, BERT, GPT, XLNet, RoBERTa, ALBERT, BART, and AMBER. In this blog post, I will explain MARGE, ConveRT, Generalization through Memorization, AdapterHub, and T5. Images and content used in this blog post, unless otherwise mentioned, are all taken from the papers on each model.
Published:
Automatic summarization is the process of shortening a set of data computationally, to create a subset (a summary) that represents the most important or relevant information within the original content. Text summarization finds the most informative sentences in a document.
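As a toy illustration of the extractive flavor of this idea, the sketch below scores each sentence by the sum of its TF-IDF weights and keeps the top-k sentences. This is just one simple scoring heuristic, not the method from the post.

```python
# Toy extractive summarizer: rank sentences by summed TF-IDF weight and
# return the k highest-scoring ones in their original order.
from sklearn.feature_extraction.text import TfidfVectorizer

def extractive_summary(sentences, k=2):
    tfidf = TfidfVectorizer().fit_transform(sentences)  # (n_sentences, vocab)
    scores = tfidf.sum(axis=1).A1                       # one score per sentence
    top = sorted(sorted(range(len(sentences)), key=lambda i: -scores[i])[:k])
    return [sentences[i] for i in top]

doc = [
    "Transformers have reshaped natural language processing.",
    "The weather was pleasant on the day of the conference.",
    "Summarization selects the most informative sentences in a document.",
]
print(extractive_summary(doc, k=2))
```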
Published:
These are the most important Transformer papers (in my opinion) that anyone working with Transformers should know. Also, there is a nice survey, Efficient Transformers: A Survey, by folks at Google that I highly recommend as well.
Published:
Transformers: This post contains my notes from over the years on different Transformer models. These notes are very crude and not edited yet (more like my cheat sheets), but I thought I would share them anyway. Please let me know if you have any comments or if you find any mistakes. Images used in this blog post, unless otherwise mentioned, are all taken from the papers on each model.
Published:
My Colab notebook on fine-tuning a T5 model for a summarization task using Transformers + PyTorch Lightning
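For a rough idea of what the notebook does, here is a hedged sketch of one T5 fine-tuning step with the HuggingFace Transformers library, written as a plain PyTorch loop rather than the Lightning wrapper for brevity. The checkpoint name, max lengths, learning rate, and placeholder texts are assumptions, not values from the notebook.

```python
# One fine-tuning step of T5 for summarization (illustrative, not the
# notebook's exact code). T5 is text-to-text, so the task is signaled by
# prepending the "summarize: " prefix to the input document.
import torch
from transformers import T5TokenizerFast, T5ForConditionalGeneration

tokenizer = T5TokenizerFast.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)

document = "..."  # a long article (placeholder)
summary = "..."   # its reference summary (placeholder)

inputs = tokenizer("summarize: " + document, max_length=512,
                   truncation=True, return_tensors="pt")
labels = tokenizer(summary, max_length=64, truncation=True,
                   return_tensors="pt").input_ids

loss = model(input_ids=inputs.input_ids,
             attention_mask=inputs.attention_mask,
             labels=labels).loss  # cross-entropy over decoder outputs
loss.backward()
optimizer.step()
optimizer.zero_grad()

# At inference time, generate a summary with beam search.
generated = model.generate(inputs.input_ids, max_length=64, num_beams=4)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```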
Published:
In this post, I briefly explain what Conditional Random Fields (CRFs) are and how they can be used for sequence labeling. A CRF is a discriminative model best suited for tasks in which contextual information or the state of the neighbors affects the current prediction. CRFs are widely used in named entity recognition, part-of-speech tagging, gene prediction, noise reduction, and object detection problems.
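To make the "neighbors affect the current prediction" point concrete, here is a self-contained Viterbi decoder for a linear-chain CRF: the transition scores between adjacent tags are exactly where neighboring decisions influence each other. All scores below are made up for illustration.

```python
# Viterbi decoding for a linear-chain CRF: find the tag sequence maximizing
# the sum of per-token emission scores and tag-to-tag transition scores.
import numpy as np

def viterbi_decode(emissions, transitions):
    """emissions: (seq_len, num_tags) per-token scores;
    transitions: (num_tags, num_tags), score of moving from tag i to tag j.
    Returns the highest-scoring tag sequence."""
    seq_len, num_tags = emissions.shape
    score = emissions[0].copy()  # best score of a path ending in each tag
    backpointers = []
    for t in range(1, seq_len):
        # total[i, j] = score of best path ending in tag i, then moving to j
        total = score[:, None] + transitions + emissions[t][None, :]
        backpointers.append(total.argmax(axis=0))
        score = total.max(axis=0)
    best_tag = int(score.argmax())
    path = [best_tag]
    for bp in reversed(backpointers):  # walk the backpointers to recover the path
        best_tag = int(bp[best_tag])
        path.append(best_tag)
    return path[::-1]

# Toy example: 2 tags (0 = O, 1 = ENTITY) over a 3-token sentence.
emissions = np.array([[2.0, 0.5], [0.4, 1.5], [1.0, 1.2]])
transitions = np.array([[0.5, -0.2], [-0.3, 0.8]])  # ENTITY tends to continue
print(viterbi_decode(emissions, transitions))
```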
Published:
In this post, I will discuss what knowledge distillation is (also referred to as Student-Teacher Learning), what the intuition behind it is, and why it works!
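As a preview of the mechanics, here is a minimal knowledge-distillation loss in PyTorch: the student matches the teacher's temperature-softened output distribution while still fitting the gold labels. The temperature and mixing weight below are typical values, not ones taken from the post.

```python
# Knowledge-distillation loss: a weighted sum of (1) KL divergence between the
# softened teacher and student distributions and (2) ordinary cross-entropy
# against the gold labels.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    # Soft targets: KL between temperature-scaled distributions. The T*T
    # factor keeps gradient magnitudes comparable across temperatures.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    # Hard targets: the usual cross-entropy on the gold labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# Toy usage with random logits: a batch of 4 examples, 10 classes.
student = torch.randn(4, 10, requires_grad=True)
teacher = torch.randn(4, 10)
labels = torch.randint(0, 10, (4,))
distillation_loss(student, teacher, labels).backward()
```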
Published:
My Colab notebook on Masked Language Modeling (MLM) + fine-tuning for text classification with BERT. In this notebook, you can see how to train a BERT model on your data for the MLM task and then fine-tune it for text classification. This includes how to encode the data, mask the tokens (similar to here), and train a model from scratch (or train on a pretrained model :). You can then load this model and fine-tune it on your labeled data for classification.
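For a sense of the masking step, here is a sketch of BERT-style dynamic masking: select 15% of the tokens, then replace 80% of those with [MASK], 10% with a random token, and leave 10% unchanged. The percentages follow the BERT paper; the tokenizer choice and function shape are illustrative, not the notebook's exact code.

```python
# BERT-style MLM masking sketch: labels are -100 everywhere except the
# masked positions, so the loss is computed only on masked tokens.
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def mask_tokens(input_ids, mlm_prob=0.15):
    input_ids = input_ids.clone()
    labels = input_ids.clone()
    probs = torch.full(labels.shape, mlm_prob)
    special = torch.tensor(tokenizer.get_special_tokens_mask(
        input_ids.tolist(), already_has_special_tokens=True), dtype=torch.bool)
    probs.masked_fill_(special, 0.0)  # never mask [CLS]/[SEP]/[PAD]
    masked = torch.bernoulli(probs).bool()
    labels[~masked] = -100            # loss only on masked positions

    # 80% of masked positions: replace with [MASK]
    replace = torch.bernoulli(torch.full(labels.shape, 0.8)).bool() & masked
    input_ids[replace] = tokenizer.mask_token_id
    # 10%: replace with a random token (half of the remaining 20%)
    rand = torch.bernoulli(torch.full(labels.shape, 0.5)).bool() & masked & ~replace
    input_ids[rand] = torch.randint(len(tokenizer), labels.shape)[rand]
    # final 10%: keep the original token unchanged
    return input_ids, labels

ids = tokenizer("knowledge distillation works", return_tensors="pt").input_ids[0]
print(mask_tokens(ids))
```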
Published:
Published:
2018 Conference on Digital Experimentation (CODE)
Published:
There was so much happening at NAACL; so many interesting works on all sorts of (old and new) NLP problems. Lots of papers focused on how to generalize models beyond the conditions seen during training. In addition, there was a workshop on “New Forms of Generalization in Deep Learning and Natural Language Processing”. In that workshop, Yejin Choi pointed out that natural language understanding (NLU) does not generalize to natural language generation (NLG). Another focus of the conference/workshops was dialogue systems and chatbots. Lots of talks focused on using a knowledge graph in chatbots to have deeper conversations without staying on the same topic for the whole conversation.
Published:
See here
Published in , 2012
Recommended citation: Andrei Lapets, Richard Skowyra, Christine Bassem, Sanaz Bahargam, Azer Bestavros, Assaf Kfoury TP2012.
Published in 2013 IEEE High Performance Extreme Computing Conference, 2013
Recommended citation: Sanaz Bahargam, Richard Skowyra, Azer Bestavros HPEC2013.
Published in , 2013
Recommended citation: Saber Mirzaei, Sanaz Bahargam, Richard Skowyra, Assaf Kfoury, Azer Bestavros TP2013.
Published in The 8th International Conference on Educational Data Mining, 2015
Recommended citation: Sanaz Bahargam, Dóra Erdos, Azer Bestavros, Evimaria Terzi EDM2015.
Published in Winter Conference on Business Intelligence, 2016
Recommended citation: Sanaz Bahargam, Theodoros Lappas WCBI 2016.
Published in IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining, 2018
Recommended citation: Sanaz Bahargam, Evangelos Papalexakis ASONAM 2018.
Published in 2018 International Conference on Computational Social Science, 2018
Recommended citation: Sanaz Bahargam, Evangelos Papalexakis IC2S2 2018.
Published in The 12th International Conference on Educational Data Mining, 2019
Recommended citation: Sanaz Bahargam, Theodoros Lappas, Evimaria Terzi EDM2019.
Published in Expert Systems with Applications, 2019
Recommended citation: Sanaz Bahargam, Behzad Golshan, Theodoros Lappas, Evimaria Terzi Expert Systems with Applications 2019.
Undergraduate course, Boston University, Computer Science Department, 2012
Undergraduate course, Boston University, Computer Science Department, 2014
Undergraduate course, Boston University, Computer Science Department, 2015
Undergraduate course, Boston University, Computer Science Department, 2016
Undergraduate course, Boston University, Computer Science Department, 2016
Graduate course, Boston University, Computer Science Department, 2017