
Friday, July 19, 2013

PowerPoint Presentation On Soft Transducer Materials For Energy Conversion

PPT On Soft Transducer Materials For Energy Conversion

Soft Transducer Materials For Energy Conversion Presentation Transcript:
1.Soft Transducer Materials For Energy Conversion



4.What is piezoelectricity?
The generation of an electric charge in the material in response to an applied mechanical stress.

Mechanical Stress → Electric Potential

6.A piezoelectric disk generates a voltage when deformed

In the crystal, positive and negative charges are aligned symmetrically, so each side forms an electric dipole. When a mechanical stress is applied, this symmetry is disturbed and a voltage is generated.
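The mechanism above can be put into numbers with a standard first-order model (not part of the original slides): the stress produces a charge q = d33·F, and the disk's own electrodes form a parallel-plate capacitor, so the open-circuit voltage is V = d33·F·t/(ε0·εr·A). The d33 and εr values below are typical PZT figures chosen purely for illustration.

```python
def piezo_disk_voltage(force_N, d33_C_per_N, thickness_m, area_m2, eps_r):
    """Open-circuit voltage of a piezoelectric disk under axial load.

    The stress separates bound charges (q = d33 * F), and the disk is
    itself a parallel-plate capacitor C = eps0 * eps_r * A / t,
    so V = q / C = d33 * F * t / (eps0 * eps_r * A).
    """
    eps0 = 8.854e-12  # vacuum permittivity, F/m
    charge = d33_C_per_N * force_N
    capacitance = eps0 * eps_r * area_m2 / thickness_m
    return charge / capacitance

# illustrative PZT disk: d33 ~ 300 pC/N, eps_r ~ 1000, 1 cm^2 area,
# 1 mm thick, loaded with 10 N
v = piezo_disk_voltage(10.0, 300e-12, 1e-3, 1e-4, 1000)
```

Note that the voltage scales linearly with the applied force, which is what makes such disks usable as force sensors.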

8.Materials used in piezoelectricity
Natural crystals
Man-made crystals
Man-made  ceramics
Lead free piezoceramics

9.Natural crystals
1. Berlinite (AlPO4), a rare phosphate mineral that is structurally identical to quartz
2. Cane sugar
3. Quartz
4. Rochelle salt
5. Topaz
6. Tourmaline-group minerals

     1. Quartz:
     Hexagonal structure;
     nearly pure, but contains traces of other elements such as Al3+, Fe3+, Ti4+, P5+, H+, Li+, Na+, K+.
Shows the piezoelectric effect perpendicular to the prism axis.

PowerPoint Presentation On Vapor Absorption Refrigeration System

PPT On Vapor Absorption Refrigeration System

Vapor Absorption Refrigeration System Presentation Transcript: 

Principle of Operation
Working Fluid for Vapor Absorption Refrigeration System (VARS)
Some Experimental Results for Different Fluid
Various Designs of VARS
Cost Analysis

The basic aim of this presentation is to provide basic background and review the existing literature on VARS.
VARS also belongs to the class of vapor cycles and is similar to the VCRS.
However, the VCRS requires mechanical work as input and is therefore called a work-operated cycle, whereas the input required by VARS is low-grade thermal energy, so it is called a heat-operated cycle.
Since VARS is a heat-operated system, waste industrial heat or solar thermal energy can be used to drive it.
This helps reduce global environmental problems such as the greenhouse effect caused by CO2 emissions from utility plants.

3. The electricity purchased from the utility plant for a VCRS can be reduced.
Another major difference is that VCRS commonly uses CFCs as refrigerants (working fluids), which cause ozone layer depletion; this makes VARS more attractive.
VARS uses natural refrigerants such as NH3 and H2O.
However, the COP of VARS is much lower than that of VCRS.
Although VARS appears to have many advantages, VCRS still dominates all market sectors.
In order to promote the use of absorption system, further development is required to improve performance and reduce cost.

The working fluid in VARS is a binary solution consisting of a refrigerant and an absorbent.
The absorbent absorbs the refrigerant, reducing the pressure, rejecting some amount of heat and forming a stable solution.
As shown in the figure, refrigeration is obtained by connecting two vessels: the left vessel contains pure refrigerant, while the right contains a solution of refrigerant and absorbent.

5. This is basically an intermittent system, meaning a continuous refrigeration effect cannot be obtained.
Since the cooling process cannot be produced continuously, we need a system that gives continuous refrigeration.



8. Mechanical work is less in VARS than in VCRS because a pump is used instead of a compressor.
However, a large amount of heat input is required, so the solution pump work is negligible in comparison. The COPs of VCRS and VARS are given by

            COP_VCRS = Q_e / W_c

            COP_VARS = Q_e / (Q_g + W_p) ≈ Q_e / Q_g

where Q_e is the refrigeration effect, W_c the compressor work, Q_g the generator heat input and W_p the pump work.

Since VARS uses heat energy, its COP is much smaller than that of VCRS.
However, comparing the systems by COP alone is not fully justified, as mechanical energy is more expensive than thermal energy.

9. Sometimes a second-law (exergetic) efficiency, i.e. the ratio of the actual COP to the Carnot COP, is used instead.
It is seen that the exergetic efficiency of a VARS system is of the same order as that of a VCRS system.

COP_ideal,VARS = Q_e / Q_g = [T_e / (T_sink − T_e)] × [(T_g − T_sink) / T_g]

i.e. a Carnot refrigerator working between T_e and T_sink, driven by a Carnot engine working between T_g and T_sink.

   Thus COP of ideal VARS increases as:
Evaporator temperature increases
Generator temperature increases
Heat sink (absorber + condenser) temperature decreases.
The COP of actual VARS is much smaller than ideal because of irreversibilities.
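The three trends above can be checked numerically with the ideal-cycle formula (temperatures in kelvin; the values below are illustrative, not taken from the slides):

```python
def cop_ideal_vars(t_evap, t_gen, t_sink):
    """Ideal VARS COP: a Carnot refrigerator working between t_evap and
    t_sink, driven by a Carnot heat engine working between t_gen and
    t_sink.  All temperatures in kelvin."""
    carnot_refrigerator = t_evap / (t_sink - t_evap)
    carnot_engine = (t_gen - t_sink) / t_gen
    return carnot_refrigerator * carnot_engine

# illustrative operating point: 5 C evaporator, 90 C generator, 35 C sink
base = cop_ideal_vars(t_evap=278.0, t_gen=363.0, t_sink=308.0)
```

Raising the evaporator or generator temperature, or lowering the heat-sink temperature, each increases the result, matching the listed trends.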

10.Working Fluid for Vapor Absorption Refrigeration System (VARS)
Properties of Working Fluid (Refrigerant – Absorbent System):
Low viscosity to minimize pump work.
Low freezing point, so that a low evaporator temperature can be maintained.
Thermal stability.
Irreversible chemical reactions of all kinds, such as decomposition, polymerization and corrosion, are to be avoided.
The refrigerant and absorbent must be completely miscible in both the liquid and vapour phases.
In addition to the above, there are two main thermodynamic requirements of the mixture.

PowerPoint Presentation On Tabu Search

PPT On Tabu Search


Tabu Search Presentation Transcript:
1.Tabu Search


3.Background of TS
TS was first proposed by Glover (1986) and was also developed by Hansen (1986)

TS has its roots in methods that cross boundaries of feasibility and local optimality

Examples of such methods include use of surrogate constraints and cutting plane approaches

Tabu search (TS) is a neighborhood search method which employs "intelligent" search and flexible memory technique to avoid being trapped at local optimum.

Tabu Search (TS) is a metaheuristic that guides a local heuristic search procedure to explore the solution space beyond local optimality

4.Basic notions of TS
The word tabu (or taboo) comes from Tongan, a language of Polynesia, where it indicates things that cannot be touched because they are sacred
Now it also means “a prohibition imposed by social custom”
In TS, tabu status of forbidden elements shift according to time and circumstance, based on an evolving memory

5.Basic notions of TS (cont.)
Tabu status can be overruled for a preferable alternative
Hence TS uses adaptive (flexible) memory
TS also uses responsive exploration, i.e. exploitation of good solutions and exploration of new promising regions

Tabu search is based on the premise that problem solving, in order to qualify as intelligent, must incorporate adaptive memory and responsive exploration. The use of adaptive memory contrasts with "memoryless" designs, such as those inspired by metaphors of physics and biology, and with "rigid memory" designs, such as those exemplified by branch and bound and its AI-related cousins.  The emphasis on responsive exploration (and hence purpose) in tabu search, whether in a deterministic or probabilistic implementation, derives from the supposition that a bad strategic choice can yield more information than a good random choice.

7.Main Features
TS emulates the human problem solving process.
It takes advantage of search history.
The historical record is usually maintained for the characteristics of the moves applied, rather than the solutions visited.
Recent moves are classified as tabu to restrict the search space.
TS is a variable neighborhood method.
Tabu restrictions are not inviolable under all circumstances.
Several types of memories are used, both short term and long term, in order to improve the exploration quality.

Selectivity (including strategic forgetting)
Abstraction and decomposition (through explicit and attributive memory)
Recency of events
Frequency of events
Differentiation between short term and long term
Quality and impact:
relative attractiveness of alternative choices
magnitude of changes in structure or constraining
regional interdependence
structural interdependence
sequential interdependence

Responsive Exploration
Strategically imposed restraints and inducements
      (tabu conditions and aspiration levels)
Concentrated focus on good regions and good solution features
      (intensification processes)
Characterizing and exploring promising new regions
      (diversification processes)
Non-monotonic search patterns
     (strategic oscillation)
Integrating and extending solutions
     (path relinking)

10.Parameters of Tabu Search:
Local search procedure
Neighborhood structure
Aspiration conditions
Form of tabu moves
Addition of a tabu move
Maximum size of tabu list
Stopping rule
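The parameters above can be tied together in a minimal sketch (the toy objective and ±1 neighborhood are chosen here for illustration; real implementations store move attributes rather than whole solutions, and add long-term memory):

```python
from collections import deque

def tabu_search(f, x0, neighbors, max_iters=100, tabu_size=5):
    """Minimal tabu search: move to the best non-tabu neighbor, record
    recent solutions in a fixed-size tabu list (short-term memory), and
    allow a tabu move only if it beats the best solution found so far
    (aspiration condition)."""
    best = current = x0
    tabu = deque(maxlen=tabu_size)          # maximum size of tabu list
    for _ in range(max_iters):              # stopping rule: iteration budget
        candidates = [x for x in neighbors(current)
                      if x not in tabu or f(x) < f(best)]  # aspiration
        if not candidates:
            break
        current = min(candidates, key=f)    # local search step
        tabu.append(current)                # addition of a tabu move
        if f(current) < f(best):
            best = current
    return best

# toy problem: minimize (x - 7)^2 over the integers, starting at 0
best = tabu_search(lambda x: (x - 7) ** 2, 0, lambda x: [x - 1, x + 1])
```

Because recently visited solutions are tabu, the search is pushed away from the optimum after reaching it instead of cycling, while the best solution found is retained.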

PowerPoint Presentation On Visible Light Communication Systems

Turbo codes based error correction scheme for dimmable visible light communication systems

Turbo codes based error correction scheme for dimmable visible light communication systems  Presentation Transcript:

1.Turbo codes based error correction scheme for dimmable visible light communication systems


The recent use of light-emitting diodes (LEDs) in lighting has become a prevailing trend.
In accordance with this trend, the IEEE 802.15.7 VLC task group has proceeded with the standardization of VLC systems.
VLC systems have a constraint that the average intensity adapts to the dimming requirement chosen by a user.
To meet this requirement, various transmission schemes at the modulation level have been presented.

4. For on-off keying (OOK), a widely used simple modulation method in VLC systems, the ratio of “ON” time to “OFF” time is adjusted within a transmission frame to meet the dimming requirement.
For example, 70% “ON” is needed for a 70% dimming requirement.
Therefore, this letter proposes a dimmable turbo code-based error correction scheme that combines with puncturing and scrambling.

Visible light communication (VLC) is a data communications medium using visible light between 400 THz (780 nm) and 800 THz (375 nm).
Visible light is not injurious to vision.
The technology uses fluorescent lamps (ordinary lamps, not special communications devices) to transmit signals at 10 kbit/s, or LEDs for up to 500 Mbit/s.
Specially designed electronic devices generally containing a photodiode receive signals from such light sources.


Here we consider the encoding method of the proposed turbo-code-based coding scheme.
Let K be the length of the information bits for transmission. These K information bits are encoded using a trellis-based error-correcting code, such as a convolutional or turbo code, to generate an N-bit codeword.
We use trellis-based codes for encoding since they resist the decoding-performance degradation induced by puncturing. Furthermore, we employ turbo codes, as they outperform convolutional codes.
Subsequently, puncturing is applied to ensure that the resulting codeword meets the desired dimming rate d.
For a puncturing rate p, Np symbols out of the N codeword symbols are removed in an arbitrary pattern, known a priori to both the transmitter and the receiver, chosen so that the systematic information bits remain intact.
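The puncturing step can be sketched as follows. This is a hypothetical helper, not code from the paper: a shared PRNG seed stands in for the pattern "known a priori" to both ends, and the systematic bits are assumed to occupy the first positions of the codeword.

```python
import numpy as np

def puncture(codeword, p, n_info, seed=42):
    """Remove Np = round(p * N) symbols from an N-symbol codeword in a
    pseudo-random but reproducible pattern, never touching the first
    n_info (systematic) positions."""
    n = len(codeword)
    n_punct = round(p * n)
    assert n_punct <= n - n_info, "cannot puncture systematic bits"
    rng = np.random.default_rng(seed)        # same seed at TX and RX
    parity_positions = np.arange(n_info, n)  # only parity may be removed
    removed = rng.choice(parity_positions, size=n_punct, replace=False)
    keep = np.setdiff1d(np.arange(n), removed)
    return codeword[keep]

cw = np.arange(100)                 # stand-in for a 100-symbol codeword
out = puncture(cw, p=0.2, n_info=50)
```

The receiver regenerates the same pattern from the seed and re-inserts erasures at the removed positions before decoding.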

The iterative decoding algorithm developed for turbo codes allows better decoding performance over existing coding schemes devised for VLC systems.
The robustness to puncturing of a trellis-based encoding method for turbo codes facilitates the support for arbitrary dimming rates.
A single encoding/decoding method for various dimming rates leads to the ease of implementation, while existing schemes require dedicated encoding/decoding methods for different dimming rates.
This coding scheme supports various code rates, while existing schemes can have limited code rate options.


10. As the dimming rate increases from 50%, the corresponding puncturing rate increases, resulting in a rise in the code rate and the error probability. We see that the performance of the high-rate code ‘TC’ is comparable to that of code ‘RM’. This demonstrates that the proposed code outperforms existing coding schemes by more than 2 dB, or offers as much as a 20-fold gain in code rate.

PowerPoint Presentation On Chip Interconnect Structure for Giga-Scale Integration VLSI ICs

PPT On Chip Interconnect Structure for Giga-Scale Integration VLSI ICs

Chip Interconnect Structure for Giga-Scale Integration VLSI ICs Presentation Transcript:
1.Novel On Chip Interconnect Structure for Giga-Scale Integration  VLSI ICs

It can be observed that a 1 mm long interconnect, which at the 1.0 µm technology node was roughly 20 times faster than the MOSFET, becomes about 6 times slower than the MOSFET at the 100 nm node.
At the 35 nm node, the interconnect is almost 1000 times slower than the MOSFET.
Hence, the performance of future giga-scale integration (GSI) systems will be severely restricted by interconnect performance.
The increase in transistor count increases the number of interconnects that need to be routed.
There is an increase in the number of metal levels with each new technology generation.
Interconnect design techniques are needed that will reduce the impact of multilevel interconnect networks on the power, performance and cost of the entire system.

4. The Itanium family contains 1.72 billion transistors; wiring billions of transistors increases total interconnect length, and the distributed resistance-capacitance (RC) product of a wire grows with wire length.
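The dependence of wire delay on length can be seen with the classic distributed-RC delay estimate (the per-unit resistance and capacitance values below are illustrative, not taken from the slides):

```python
def wire_delay_s(r_per_um, c_per_um, length_um):
    """Approximate 50% delay of a distributed RC line, ~0.38 * R * C
    (versus ~0.69 * R * C for a lumped RC), where R and C are the total
    wire resistance and capacitance.  Since both grow linearly with
    length, the delay grows with length squared."""
    r_total = r_per_um * length_um
    c_total = c_per_um * length_um
    return 0.38 * r_total * c_total

d1 = wire_delay_s(1.0, 0.2e-15, 1000.0)   # 1 mm wire, 1 ohm/um, 0.2 fF/um
d2 = wire_delay_s(1.0, 0.2e-15, 2000.0)   # doubling the length
```

The quadratic growth with length is exactly why long global wires, not transistors, limit GSI performance, and why repeater insertion (which breaks a long wire into short segments) helps.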

Low-K Dielectric And Low Resistivity Interconnect Material-- carbon doped silicon dioxide
Repeater Insertion

6.Types of Interconnections


8. Interconnect design techniques will reduce the impact of interconnect networks on the cost of the entire system.
Wave-Pipelined Multiplexed (WPM) Routing

9.Circuit diagram of WPM Routing Technique

10.WPM Nets
The sources and the sinks of the two nets should satisfy the source-sink proximity constraint.
 The nets should have a length greater than a threshold interconnect length. Here the threshold length is determined based on the total area occupied by all the cells. Assuming a square placement for all the cells, the threshold length is assumed to be half the edge size.

PowerPoint Presentation On UWB Echo Signal

PPT On UWB Echo Signal Detection With Ultra-Low Rate

UWB Echo Signal Presentation Transcript:

1.UWB Echo Signal Detection With Ultra-Low Rate Sampling Based on Compressed Sensing
1. Introduction

2. UWB Signal processing
3. Compressed Sensing Theory
 3.1 Sparse representation of signals
 3.2 AIC (analog to information converter)
 3.3 Waveform Matched dictionary for UWB signal
4. Echo detection system
5. Experimental results
6. References

A major challenge in ultra-wide-band (UWB) signal processing is the requirement for a very high sampling rate.
The recently emerging compressed sensing (CS) theory makes processing UWB signal at a low sampling rate possible if the signal has a sparse representation in a certain space.
 Based on the CS theory, a system for sampling UWB echo signal at a rate much lower than Nyquist rate and performing signal detection is proposed in this paper.

4.UWB Signal processing
An ultra-wide-band (UWB) signal processing system is characterized by very high bandwidth, up to several gigahertz. To digitize a UWB signal, a very high sampling rate is required according to the Shannon-Nyquist sampling theorem,
but this is difficult to implement with a single analog-to-digital converter (ADC) chip.
To address this problem, parallel ADCs have been developed: based on hybrid filter banks (HFBs), a parallel-ADC system is used to sample and reconstruct the UWB signal.

5.UWB Signal processing
But this parallel-ADC system faces the following difficulty:
the digital filters for signal synthesis require the exact transfer functions of the analog filters used for signal analysis, which may not be available in practice because of various uncertainties in the analog components.
To overcome this, CS theory is introduced.

6.Compressed Sensing Theory
Traditional sampling theorem requires a band-limited signal to be sampled at the Nyquist rate. CS theory suggested that, if a signal has a sparse representation in a certain space, one can sample the signal at a rate significantly lower than Nyquist rate and reconstruct it with overwhelming probability by optimization techniques.
There are three key elements that need to be addressed in the use of CS theory:
1) How to find a space in which signals have a sparse representation?
2) How to obtain random measurements as samples of the sparse signal?
3) How to reconstruct the original signal from the samples by optimization techniques?
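These three questions can be illustrated end to end with a toy noiseless example: a synthetic sparse signal, random Gaussian measurements far below the Nyquist count, and greedy recovery with orthogonal matching pursuit (OMP, one of several standard CS reconstruction techniques; the dimensions here are arbitrary):

```python
import numpy as np

def omp(Phi, y, k):
    """Orthogonal matching pursuit: repeatedly add the dictionary atom
    most correlated with the residual, then re-fit the coefficients on
    the chosen support by least squares."""
    residual, support = y.astype(float), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(Phi.T @ residual))))
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(0)
n, m, k = 128, 48, 4                          # ambient dim, measurements, sparsity
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.normal(size=k)  # sparse signal
Phi = rng.normal(size=(m, n)) / np.sqrt(m)    # random measurement matrix
y = Phi @ x                                   # only m << n samples taken
x_hat = omp(Phi, y, k)
```

With m well above the sparsity level, OMP typically recovers the noiseless sparse signal exactly from far fewer samples than its dimension.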

7.Sparse representation of signals
Sparse representations are representations that account for most or all information of a signal with a linear combination of a small number of elementary signals called atoms. Often, the atoms are chosen from a so called over-complete dictionary. Formally, an over-complete dictionary is a collection of atoms such that the number of atoms exceeds the dimension of the signal space, so that any signal can be represented by more than one combination of different atoms.
Sparseness is one of the reasons for the extensive use of popular transforms such as the Discrete Fourier Transform, the wavelet transform and the Singular Value Decomposition.

8.AIC (analog to information converter)

9.AIC offers a feasible technique to implement low-rate “information” sampling.
It consists of three main components: a wideband pseudorandom modulator , a filter and a low-rate ADC .
The goal of pseudorandom sequence is to spread the frequency of signal and provide randomness necessary for successful signal recovery.

10.Waveform Matched dictionary for UWB signal
To obtain a sparse representation of signal in a certain space, many rules were proposed to match the signal in question and the basis functions of the space.
Here, waveform-matched rules are used to design a dictionary for the UWB signal.
The receiver is aware of the exact model of the transmitted signal. To achieve a very sparse representation of echo signals, a priori knowledge of the transmitted signal and of the echo-signal model should be taken into account in the design of the basis or dictionary.

Without regard to other interferences, such as Doppler shift, a noise-free echo signal can be simply modeled as the sum of variously scaled, time-shifted versions of the transmitted signal. Based on the above considerations, we can construct a matched dictionary for the echo signal.

PowerPoint Presentation On VARIABLE SPEED DRIVES


VARIABLE SPEED DRIVES Presentation Transcript:

A variable speed drive (VSD), also known as a frequency converter, adjustable speed drive or inverter, is an electronic device that controls the characteristics of a motor’s electrical supply. Therefore, it is able to control the speed and torque of a motor, achieving a better match with the process requirements of the machine it is driving. So in applications where variable control is desirable, slowing down a motor with a VSD does reduce energy use substantially.

Save energy and improve efficiency
Process controllability
Reduced mechanical wear and shock
Improved power factor
Coordination of motion on various shafts
Easy interfacing with automation systems

In addition to energy savings and better process control, VFDs can provide other benefits:
A VFD may be used for control of process temperature, pressure or flow without the use of a separate controller; suitable sensors and electronics are used to interface the driven equipment with the VFD.
Maintenance costs can be lowered, since lower operating speeds result in longer life for bearings and motors.
Eliminating throttling valves and dampers also does away with maintaining these devices and all their associated controls.
A separate soft starter for the motor is no longer required.
Controlled ramp-up speed in a liquid system can eliminate water hammer problems.
The ability of a VFD to limit torque to a user-selected level can protect driven equipment that cannot tolerate excessive torque.

5.Type of Drives
Mechanical variable speed drives
Hydraulic variable speed drives
Electrical variable speed drives

6. Mechanical variable speed drives
Belt and chain drives with adjustable-diameter sheaves
Metallic friction drives

7.Hydraulic variable speed drives
Hydrodynamic types

Hydrostatic types

8.Electrical variable speed drives
DC Motor Drives
Eddy Current Motor Drives
AC Motor Drives

9. The workhorse of industry is
           the “ELECTRIC MOTOR”
All of the machines mentioned earlier are commonly driven by electric motors; it can be said that the electric motor is the workhorse of industrial processes. We will now take a closer look at electric motors, especially the squirrel-cage AC motor with an AC drive (VFD), which is the most common motor used in industrial processes.

A VSD works by converting the incoming electrical supply of fixed frequency into a variable frequency output. This variation in frequency allows the drive to control the way in which the motor operates — a low frequency for a slow speed, and a higher frequency for a faster speed. The output can also be changed to enable the motor to generate more or less torque as required. So, the motor and drive combination might be used for turning a large load at fairly slow speeds, or turning a lighter load at high speeds, maximising efficiency.
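The frequency-to-speed relationship described above follows the standard synchronous-speed formula for AC motors (the actual shaft speed of a squirrel-cage motor is slightly lower due to slip):

```python
def synchronous_speed_rpm(supply_freq_hz, poles):
    """Synchronous speed of an AC motor: Ns = 120 * f / p.
    A VSD varies f, and the motor speed follows (minus slip)."""
    return 120.0 * supply_freq_hz / poles

full = synchronous_speed_rpm(50.0, 4)   # 4-pole motor at 50 Hz mains
half = synchronous_speed_rpm(25.0, 4)   # VSD halves the frequency
```

A low output frequency thus gives a slow speed and a higher frequency a faster speed, exactly as described.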

PowerPoint Presentation On WiMAX Security

Automated Secured Cost Effective Key Refreshing Technique to Enhance WiMAX Privacy Key Management


WiMAX Security Presentation Transcript:
1.Automated Secured Cost Effective Key Refreshing Technique to Enhance WiMAX Privacy Key Management


Related works


Proposed work


Results obtained


WiMAX (Worldwide Interoperability for Microwave Access) is an IP-based 4G technology.
IEEE 802.16e (Mobile WiMAX) provides seamless broadband access for mobile users.
Security is provided by a separate security sublayer.
Key management plays a vital role in WiMAX security.
The Privacy Key Management protocol provides synchronized and secure distribution of keying data from the BS to the MS.

4.WiMAX Security

5.Related Works

6.Existing Key Generation

7.Key Exchange

8.Inadequacies In Existing Work

9. Huge amount of bandwidth is utilized.
Large storage is required.
Time consumed for key exchange is high.
Vulnerable to impersonation and man-in-the-middle attacks.

An Automated Key Refreshing Technique is proposed for EAP-based PKMv2 key generation to reduce the key exchange time and key storage.
It makes effective use of bandwidth and resources.
It also provides security by overcoming man-in-the-middle and forgery attacks.

PowerPoint Presentation On Microstrip Low pass Chebyshev Filter using DGS

PPT On Enhancement Cut off Frequency of Microstrip Low pass Chebyshev Filter using DGS

Microstrip Low pass Chebyshev Filter using DGS Presentation Transcript:
1.Enhancement Cut off Frequency of Microstrip Low pass Chebyshev Filter using DGS

Objective of the proposed work
Methodology to Achieve the Objective
Filter  Designing
Fabrication of  Filter
Low pass filter with DGS
Operational  Mechanism
Simulation & Measured   Results

3.Objective of the proposed work
To achieve 2.5GHz cut off frequency of microstrip low pass Chebyshev filter using DGS.

4.Methodology To Achieve The Objective
The enhanced cut-off frequency in the proposed filter is achieved by using a defected ground structure (DGS).
A DGS consists of structures etched in the ground plane of the microwave substrate; the resonant characteristics of the DGS are then used in the filter design.
 (a)    Simulated cut-off frequency: 2.66 GHz
    (b)    VNA-measured cut-off frequency: 2.715 GHz

5.Filter Design
5th order Chebyshev Low pass filter using Insertion loss method

6.Design Specification
The design specifications for a 5th-order Chebyshev low-pass filter using the insertion-loss method are as follows:

7.Design Specification

8.Design Specification
Impedance and frequency scaling:
For a new load impedance R0 and cut-off frequency ω0, the original prototype resistance Rn, inductance Ln and capacitance Cn are changed by the following formulae:

R'n = R0 Rn,    L'n = R0 Ln / ω0,    C'n = Cn / (R0 ω0)
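Since the numerical element values themselves did not survive in this transcript, here is a sketch of the standard insertion-loss-method computation: the equal-ripple prototype g-values via the usual Chebyshev recurrence, followed by impedance/frequency scaling. The 0.5 dB ripple value and the shunt-C-first ladder topology are assumptions for illustration, not taken from the slides.

```python
import math

def chebyshev_g(n, ripple_db):
    """Low-pass prototype element values g1..gn for an equal-ripple
    (Chebyshev) response, via the standard recurrence."""
    beta = math.log(1.0 / math.tanh(ripple_db / 17.37))
    gamma = math.sinh(beta / (2 * n))
    a = [math.sin((2 * k - 1) * math.pi / (2 * n)) for k in range(1, n + 1)]
    b = [gamma ** 2 + math.sin(k * math.pi / n) ** 2 for k in range(1, n + 1)]
    g = [2 * a[0] / gamma]
    for k in range(2, n + 1):
        g.append(4 * a[k - 2] * a[k - 1] / (b[k - 2] * g[-1]))
    return g

def scale_elements(g, r0, f_cutoff_hz):
    """Impedance/frequency scaling: L' = g*R0/w0 for series inductors,
    C' = g/(R0*w0) for shunt capacitors (shunt-C-first ladder assumed)."""
    w0 = 2 * math.pi * f_cutoff_hz
    return [("C", gk / (r0 * w0)) if i % 2 == 0 else ("L", gk * r0 / w0)
            for i, gk in enumerate(g)]

g = chebyshev_g(5, ripple_db=0.5)      # symmetric for odd order
elements = scale_elements(g, r0=50.0, f_cutoff_hz=2.5e9)
```

The scaled lumped elements are then converted into distributed (microstrip) elements, as in the next slide.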

9.Converting into distributed elements:

10.Fabrication of microstrip filter
Photolithography steps
The pattern on the mask is transferred on the substrate by
           means of photolithography
Step 1.   Clean the substrate and dry it thoroughly in front of a heat blower.
Step 2.   Coat the substrate with photoresist material.
Step 3.   Preheat the substrate in an oven at 98 °C - 100 °C for 10 minutes.
Step 4.   Now align the mask on the substrate.
Step 5.   Expose the substrate to ultraviolet rays for 2 minutes.

PowerPoint Presentation On Food Grains

PPT On Identification and classification of similar looking food grains

Food Grains Presentation Transcript:
1.Identification and classification of similar looking food grains

2.Problem Statement
A comparative study of ANN and SVM classifier models, taking as a case study the identification and classification of similar-looking food grains, is the main aim of this work.

The problem is to process images of selected types of grains and extract features from the samples based on RGB, HSV and wavelet-based texture.

Developing  the different classification models using various set of features.

Identifying similar-looking grains and testing the models' performance based on their recognition rate.

3.Methodology for Classification of Similar Looking Grains

4.Images of Grains Samples

5.Feature Extraction
1.Color Features
RGB (Red, Green, and Blue) Color Model
Extraction of RGB features means separating the RGB components from the original color image sample.
After separation, the features, viz. mean, standard deviation, variance and range, are computed.
HSV (Hue, Saturation and Value) Color Model
HSV- used to distinguish one color from another.

Hue is the angle measured from the red axis to the point  of interest .

Saturation refers to relative purity or the amount of white light mixed with a hue.

Value or Brightness embodies the chromatic notion of intensity.
2.Wavelet Texture Features 
Texture is a connected set of pixels that occur repeatedly in an image .

It provides the information about the variation in the intensity of a surface by  quantifying
properties such as smoothness, coarseness, and regularity.

Texture feature extraction is carried out by decomposing an image using the discrete wavelet transform.

6.Algorithm for  Color Feature Extraction
Input:   Original 24-bit color image.
Output: 18 color features
Step 1:  Separate the RGB components from the original 24-bit input color image.
Step 2:  Obtain the HSV components from RGB components using following equations.
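The conversion equations themselves did not survive in this transcript, so the sketch below uses the stdlib colorsys conversion instead. The slides list 18 color features without spelling out the exact selection, so this illustrative version simply computes mean, standard deviation, variance and range for all six channels (24 values):

```python
import colorsys
import numpy as np

def color_features(image_u8):
    """Mean, standard deviation, variance and range for each of the
    R, G, B channels and the derived H, S, V channels."""
    rgb = image_u8.astype(float) / 255.0                 # Step 1: separate RGB
    hsv = np.array([[colorsys.rgb_to_hsv(*px) for px in row]
                    for row in rgb])                     # Step 2: RGB -> HSV
    feats = []
    for planes in (rgb, hsv):
        for ch in range(3):
            p = planes[..., ch]
            feats += [p.mean(), p.std(), p.var(),
                      float(p.max() - p.min())]
    return feats

toy = np.array([[[255, 0, 0], [0, 255, 0]],
                [[0, 0, 255], [255, 255, 0]]], dtype=np.uint8)
feats = color_features(toy)
```

The per-pixel colorsys loop is fine for small samples; a vectorized conversion would be used for large image sets.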

7.Color and HSV Features with Values for Mustard Sample

8.Multi-level Wavelet Decomposition
Wavelet is a mathematical function used to divide a given function or continuous-time signal into different scale components.
The approximation image LL is obtained by low-pass filtering in both the row and column directions.
The detail images LH, HL and HH contain the high-frequency components.
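A single decomposition level can be sketched with plain NumPy using the Haar wavelet, the simplest choice (pairwise averages and differences along rows, then columns); production code would normally use a wavelet library and higher-order wavelets:

```python
import numpy as np

def haar_level1(gray):
    """One level of 2-D Haar wavelet decomposition.  Low pass = pairwise
    average, high pass = pairwise difference; applying them along rows
    and then columns yields the LL approximation and the LH/HL/HH
    detail sub-images (image sides assumed even)."""
    a = gray.astype(float)
    lo = (a[:, 0::2] + a[:, 1::2]) / 2.0     # row-wise low pass
    hi = (a[:, 0::2] - a[:, 1::2]) / 2.0     # row-wise high pass
    LL = (lo[0::2, :] + lo[1::2, :]) / 2.0
    LH = (lo[0::2, :] - lo[1::2, :]) / 2.0
    HL = (hi[0::2, :] + hi[1::2, :]) / 2.0
    HH = (hi[0::2, :] - hi[1::2, :]) / 2.0
    return LL, LH, HL, HH

flat = np.full((4, 4), 9.0)                  # featureless test image
LL, LH, HL, HH = haar_level1(flat)           # detail bands vanish
```

On a textureless image the detail sub-bands are all zero, which is why their statistics make useful texture descriptors.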

9.Algorithm for Wavelet-based Textural Feature Extraction
Input:   Original color image.
Output: 42 features.
Step 1: Convert the original color image to gray image.
Step 2: Calculate the level-1 wavelet transform, decomposing the signal into LL (low-frequency components) and LH, HL and HH (high-frequency components in the horizontal, vertical and diagonal directions).
Step 3: Consider vertical details only.
Step 4:  Find the Mean, Variance, Range, Energy, Homogeneity, Maximum Probability and Inverse Difference Moment (IDM).

Step 5: Copy the values obtained to a feature vector
Step 6: Repeat steps 4 and 5 for the remaining detailed coefficient sets, namely the horizontal and diagonal coefficients.
Step 7: Repeat steps 4 to 6 up to the third level of decomposition and combine the feature sets into a single feature vector.

10.Texture Descriptors