fairseq vs huggingface

Fairseq and HuggingFace Transformers are the two libraries people most often compare for training and running sequence-to-sequence models. This post contrasts them and then surveys a few alternatives; depending on what you want to do, you might be able to take away the names of a few tools that interest you or that you didn't know existed. With Hugging Face raising $40 million in funding, NLP has the potential to provide us with a smarter world ahead, but we will not consider all the models in its library, as there are 200,000+ of them.

Fairseq is Facebook AI Research's sequence modeling toolkit. It has Facebook's implementations of translation and language models along with scripts for custom training, and it provides an all-in-one environment supporting a wide variety of reference models, pretrained models, datasets, and so on.

HuggingFace Transformers is the go-to library for using pretrained transformer-based models for both research and real-world problems, and it also has custom training scripts for these cutting-edge models. All of its model classes inherit the generic methods the library implements for every model (such as downloading or saving checkpoints, resizing the input embeddings, and pruning attention heads), and the Flax variants additionally accept a dtype argument: when specified, all the computation will be performed with the given dtype (see to_fp16() if you instead wish to change the dtype of the model parameters themselves).
So what is the difference between a fairseq model and an HF model? A large part of it is the data pipeline. Fairseq doesn't really do any preprocessing for you: you start with raw text training data, use HuggingFace (or any other BPE tool) to tokenize and apply BPE, get back a text file with BPE tokens separated by spaces, and then feed that file into fairseq-preprocess, which will tensorize it and generate dict.txt. In Transformers, the equivalent steps happen inside the tokenizer classes; see PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.

The two libraries can also interoperate. Fairseq ships a wrapper that loads a pretrained GPT-2 from Hugging Face (https://github.com/pytorch/fairseq/blob/master/fairseq/models/huggingface/hf_gpt2.py), and a recurring question is whether this is only a thin wrap or whether more should be done, for example specially changing the data preprocessing steps, before fine-tuning pretrained HuggingFace models with the fairseq framework. Users have also asked about a difference in memory efficiency between HF and fairseq when training with fp16; the same question was raised on the NVIDIA/Apex issue tracker without a conclusive answer.
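Here is a minimal sketch of that preprocessing hand-off. The file names, the choice of GPT-2's BPE, and the --only-source flag are illustrative assumptions, not requirements of either library:

```python
# Step 1: tokenize raw text with a HuggingFace BPE tokenizer and write the
# BPE tokens back out separated by spaces, one example per line.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")  # any BPE tokenizer would do
with open("train.raw") as fin, open("train.bpe", "w") as fout:
    for line in fin:
        fout.write(" ".join(tok.tokenize(line.strip())) + "\n")

# Step 2: tensorize with fairseq-preprocess, which also generates dict.txt:
#   fairseq-preprocess --only-source --trainpref train.bpe --destdir data-bin
```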
BART is a good case study in how a fairseq-born model gets packaged in Transformers. Its pretraining task involves randomly shuffling the order of the original sentences and a novel in-filling scheme, in which spans of text are replaced with a single mask token. The model classes (BartModel, BartForConditionalGeneration, BartForSequenceClassification, BartForQuestionAnswering, and their TensorFlow and Flax counterparts) inherit the generic methods mentioned above, and their forward passes return the usual outputs, including the hidden states of the encoder and decoder at the output of each layer plus the initial embedding outputs, and attention weights after the softmax.

Its tokenizer is very similar to GPT-2's: BartTokenizerFast is a fast BART tokenizer (backed by HuggingFace's tokenizers library) derived from the GPT-2 tokenizer, and it has been trained to treat spaces like parts of the tokens (a bit like sentencepiece), so a word is encoded differently depending on whether or not it is at the beginning of a sentence. Indices can be obtained using AutoTokenizer, and special tokens are added with the tokenizer's prepare_for_model method. Two generation details are worth knowing: if decoder_input_ids is not provided, the model creates the tensor by shifting input_ids to the right, and cached past_key_values can be passed back in to speed up sequential decoding (in which case only the last hidden state, of shape (batch_size, 1, hidden_size), is output). Because BART uses absolute position embeddings, it is usually advised to pad the inputs on the right rather than the left. The facebook/bart-base and facebook/bart-large checkpoints can be used to fill multi-token masks.
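A short sketch of multi-token mask filling, using the example sentence from the documentation (exact beam-search output may vary by library version):

```python
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large")

# A single <mask> may be filled with several tokens during generation.
text = "UN Chief Says There Is No <mask> in Syria"
inputs = tokenizer(text, return_tensors="pt")
out = model.generate(inputs["input_ids"], num_beams=5, max_length=25)
print(tokenizer.decode(out[0], skip_special_tokens=True))
# e.g. "UN Chief Says There Is No Plan to Stop Chemical Weapons in Syria"
```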
Configuration can help us understand the inner structure of the HuggingFace models. This is the role of classes such as BartConfig: instantiating one with no arguments yields the facebook/bart-large architecture, with defaults like max_position_embeddings = 1024, decoder_layers = 12, encoder_attention_heads = 16, dropout = 0.1, attention_dropout = 0.0, init_std = 0.02, scale_embedding = False, and is_encoder_decoder = True (see the documentation from PretrainedConfig for more information). One question that comes up when reading these defaults: why are there 1024 position embeddings when the paper's authors write about pre-training with 512? Are the extra positions randomly initialised, or is it something different? The configuration records what the released checkpoint supports, which need not match every stage of pretraining described in the paper.
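A minimal sketch of inspecting the configuration, mirroring the initialization example in the docs:

```python
from transformers import BartConfig, BartModel

# Initializing a BART facebook/bart-large style configuration
configuration = BartConfig()

# Initializing a model (with random weights) from that configuration
model = BartModel(configuration)

print(configuration.max_position_embeddings)  # 1024
# to_dict() returns a dictionary of all the attributes of this instance
print(sorted(configuration.to_dict())[:5])
```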
The task heads follow the same pattern across frameworks. BartForConditionalGeneration is BART with a language modeling head (a linear layer with weights tied to the input embeddings) and can be used for summarization; BartForSequenceClassification returns classification scores (or regression scores if config.num_labels == 1); BartForQuestionAnswering returns span-start and span-end logits, and its loss is the sum of a cross-entropy for the start and end positions. Each model's forward method overrides the __call__ special method, so you call the instantiated model directly and it takes care of running the pre- and post-processing steps for you.
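A sketch of summarization with the CNN/DailyMail fine-tune (the checkpoint name facebook/bart-large-cnn and the generation settings follow the standard docs example; the article text comes from that example too):

```python
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large-cnn")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn")

article = (
    "PG&E stated it scheduled the blackouts in response to forecasts for "
    "high winds amid dry conditions. The aim is to reduce the risk of wildfires."
)
inputs = tokenizer(article, return_tensors="pt")
summary_ids = model.generate(inputs["input_ids"], num_beams=2, max_length=20)
print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True)[0])
# e.g. "PG&E scheduled the blackouts in response to forecasts for high winds."
```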
FSMT (FairSeq MachineTranslation) models are the most direct port of fairseq into Transformers. They were introduced in Facebook FAIR's WMT19 News Translation Task Submission by Nathan Ng, Kyra Yee, Alexei Baevski, Myle Ott, Michael Auli and Sergey Edunov; that system improved upon the WMT18 submission by 4.5 BLEU points, experimenting with different bitext data filtering schemes and validated by a human evaluation campaign. FSMT differs from BART in a few architectural details: it doesn't share embeddings tokens between encoder and decoder, keeping separate source and target vocabularies (FSMTConfig defaults include src_vocab_size = 42024 and num_beams = 5), and it uses the eos_token_id as the starting token for decoder_input_ids generation. FSMTConfig is the configuration class to store the configuration of an FSMTModel. DISCLAIMER: the port is maintained on a best-effort basis, so if you see something strange, file a GitHub Issue.
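A sketch of translating with one of the ported WMT19 checkpoints (facebook/wmt19-en-de is one of several language pairs published from this submission):

```python
from transformers import FSMTForConditionalGeneration, FSMTTokenizer

mname = "facebook/wmt19-en-de"
tokenizer = FSMTTokenizer.from_pretrained(mname)
model = FSMTForConditionalGeneration.from_pretrained(mname)

inputs = tokenizer("Machine learning is great, isn't it?", return_tensors="pt")
outputs = model.generate(inputs["input_ids"], num_beams=5)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
# e.g. "Maschinelles Lernen ist großartig, nicht wahr?"
```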
People also ask about fairseq vs gpt-neox, transformers vs sentence-transformers, and fairseq vs DeepSpeed; beyond those, here are the alternatives most often suggested alongside the two libraries (see also https://github.com/PetrochukM/PyTorch-NLP#related-work for a longer comparison).

Transformers. Explanation: this is the most popular library out there, implementing a wide variety of transformers, from BERT and GPT-2 to BART and Reformer. I'm most familiar with it, and despite the weird name I've always found it to be very dependable and high-quality; their code readability and documentation are crisply clear.

fairseq S2T. Explanation: fairseq S2T is a fairseq extension for speech-to-text (S2T) modeling tasks such as end-to-end speech recognition and speech-to-text translation, providing end-to-end workflows from data pre-processing and model training to offline (and online) inference.

TorchText. Explanation: TorchText is officially supported by PyTorch, and hence grew in popularity. It contains convenient data processing utilities to process and prepare text in batches before you feed it into your deep learning framework.

PyTorch-NLP. Explanation: PyTorch-NLP is meant to be just a small utility toolset; the difference is that it is written to be more flexible. If you want to use PyTorch without the help of a framework, I'd pick PyTorch-NLP. Its author has continued to use it to publish research and to start WellSaid Labs.

ParlAI. Explanation: ParlAI is Facebook's #1 framework for sharing, training, and testing dialogue models for different kinds of dialogue tasks.

OpenNMT. Explanation: OpenNMT is a convenient and powerful tool for machine translation and sequence learning tasks. It is very robust, platform-independent, and scalable.

spaCy. Explanation: spaCy just gets the job done, and fast. It also supports 59+ languages and several pretrained word vectors that can get you started quickly.
In short: HuggingFace Transformers is the safer default when you want pretrained checkpoints, broad model coverage, and a consistent tokenizer and configuration API; fairseq is the better fit when you want Facebook's reference training recipes for translation and language modeling, or extensions such as S2T. The two also compose well, since fairseq doesn't really do any preprocessing: you can tokenize with HuggingFace and train with fairseq, or port fairseq checkpoints such as BART and FSMT into Transformers for inference. And if you hit rough edges, such as fp16 memory behaviour differing between the two, consider filing an issue rather than assuming it is expected.
